Test Report: Docker_Linux_containerd_arm64 21923

0ff1edca1acc03f8c3eb691c9cf9caebdbe6133d:2025-11-20:42417

Failed tests (4/333)

Order  Failed test  Duration (s)
301 TestStartStop/group/old-k8s-version/serial/DeployApp 13.77
314 TestStartStop/group/no-preload/serial/DeployApp 14.2
317 TestStartStop/group/embed-certs/serial/DeployApp 14.18
341 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 16.61
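All four failures are the same DeployApp step run against different profiles; the detailed log below (old-k8s-version) fails its file-descriptor limit check, where 'ulimit -n' inside the busybox pod returned 1024 but the test expects 1048576. A minimal sketch for reproducing that check by hand with the same kubectl command the test runs (context name taken from this run; substitute the other profile names for the remaining failures):

    # reproduce the failing assertion from start_stop_delete_test.go:194 against this run's profile
    kubectl --context old-k8s-version-023521 exec busybox -- /bin/sh -c "ulimit -n"
    # the test expects 1048576; this run printed 1024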
TestStartStop/group/old-k8s-version/serial/DeployApp (13.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-023521 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9efbd2b5-b6e4-4170-a68d-a23aed850439] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9efbd2b5-b6e4-4170-a68d-a23aed850439] Running
E1120 21:08:58.759710    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004015678s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-023521 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-023521
helpers_test.go:243: (dbg) docker inspect old-k8s-version-023521:
-- stdout --
	[
	    {
	        "Id": "74636f055b722c8519558648b559d62370a6961b557856e2a138e817c51bed49",
	        "Created": "2025-11-20T21:07:56.631557256Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 204921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:07:56.700927153Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/74636f055b722c8519558648b559d62370a6961b557856e2a138e817c51bed49/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74636f055b722c8519558648b559d62370a6961b557856e2a138e817c51bed49/hostname",
	        "HostsPath": "/var/lib/docker/containers/74636f055b722c8519558648b559d62370a6961b557856e2a138e817c51bed49/hosts",
	        "LogPath": "/var/lib/docker/containers/74636f055b722c8519558648b559d62370a6961b557856e2a138e817c51bed49/74636f055b722c8519558648b559d62370a6961b557856e2a138e817c51bed49-json.log",
	        "Name": "/old-k8s-version-023521",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-023521:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-023521",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74636f055b722c8519558648b559d62370a6961b557856e2a138e817c51bed49",
	                "LowerDir": "/var/lib/docker/overlay2/1a6c29fcf2b65f0a5aba098c27207f076274729c865583cb75f14e3f5e8e9d13-init/diff:/var/lib/docker/overlay2/5105da773b59b243b777c3c083d206b6a741bd11ebc5a0283799917fe36ebbb2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a6c29fcf2b65f0a5aba098c27207f076274729c865583cb75f14e3f5e8e9d13/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a6c29fcf2b65f0a5aba098c27207f076274729c865583cb75f14e3f5e8e9d13/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a6c29fcf2b65f0a5aba098c27207f076274729c865583cb75f14e3f5e8e9d13/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-023521",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-023521/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-023521",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-023521",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-023521",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da77aa3a20347ff46dedc0a9ef78336dcc8e064623662f9b41628cae013296ea",
	            "SandboxKey": "/var/run/docker/netns/da77aa3a2034",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-023521": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:2d:84:c5:da:ec",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ca65c34920334c24d3f55f43079525c2100ac74d43715bed2327162f4af2415f",
	                    "EndpointID": "a6998ae74ea0e5ddb5cb609286800ef77944cf6d7685c33f3bf163f013650054",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-023521",
	                        "74636f055b72"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-023521 -n old-k8s-version-023521
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-023521 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-023521 logs -n 25: (1.205069815s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-448616 sudo docker system info                                                                                                                                                                                                            │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo containerd config dump                                                                                                                                                                                                        │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-982573                                                                                                                                                                                                                        │ kubernetes-upgrade-982573 │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:06 UTC │
	│ delete  │ -p cilium-448616                                                                                                                                                                                                                                    │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:06 UTC │
	│ start   │ -p force-systemd-env-444240 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-444240  │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p cert-expiration-339813 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-339813    │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ force-systemd-env-444240 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-444240  │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ delete  │ -p force-systemd-env-444240                                                                                                                                                                                                                         │ force-systemd-env-444240  │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p cert-options-530158 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-530158       │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ cert-options-530158 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-530158       │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ -p cert-options-530158 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-530158       │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ delete  │ -p cert-options-530158                                                                                                                                                                                                                              │ cert-options-530158       │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p old-k8s-version-023521 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:08 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:07:50
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:07:50.345634  204529 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:07:50.345833  204529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:07:50.345859  204529 out.go:374] Setting ErrFile to fd 2...
	I1120 21:07:50.345881  204529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:07:50.346371  204529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 21:07:50.347217  204529 out.go:368] Setting JSON to false
	I1120 21:07:50.348406  204529 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3020,"bootTime":1763669851,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1120 21:07:50.348524  204529 start.go:143] virtualization:  
	I1120 21:07:50.352212  204529 out.go:179] * [old-k8s-version-023521] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:07:50.356524  204529 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:07:50.356581  204529 notify.go:221] Checking for updates...
	I1120 21:07:50.363361  204529 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:07:50.366491  204529 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 21:07:50.369561  204529 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	I1120 21:07:50.373535  204529 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:07:50.376604  204529 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:07:50.380233  204529 config.go:182] Loaded profile config "cert-expiration-339813": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 21:07:50.380345  204529 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:07:50.413179  204529 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:07:50.413309  204529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:07:50.474237  204529 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 21:07:50.464908521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:07:50.474343  204529 docker.go:319] overlay module found
	I1120 21:07:50.479656  204529 out.go:179] * Using the docker driver based on user configuration
	I1120 21:07:50.482678  204529 start.go:309] selected driver: docker
	I1120 21:07:50.482701  204529 start.go:930] validating driver "docker" against <nil>
	I1120 21:07:50.482716  204529 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:07:50.483457  204529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:07:50.571864  204529 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 21:07:50.55798512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:07:50.572017  204529 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 21:07:50.572283  204529 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:07:50.575452  204529 out.go:179] * Using Docker driver with root privileges
	I1120 21:07:50.578296  204529 cni.go:84] Creating CNI manager for ""
	I1120 21:07:50.578368  204529 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:07:50.578383  204529 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:07:50.578588  204529 start.go:353] cluster config:
	{Name:old-k8s-version-023521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-023521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:07:50.581769  204529 out.go:179] * Starting "old-k8s-version-023521" primary control-plane node in "old-k8s-version-023521" cluster
	I1120 21:07:50.584601  204529 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 21:07:50.587646  204529 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:07:50.590630  204529 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1120 21:07:50.590686  204529 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1120 21:07:50.590697  204529 cache.go:65] Caching tarball of preloaded images
	I1120 21:07:50.590819  204529 preload.go:238] Found /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1120 21:07:50.590835  204529 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1120 21:07:50.590975  204529 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/config.json ...
	I1120 21:07:50.591006  204529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/config.json: {Name:mkc1e1e459ad5ad023bc0c29174f23ee97f50186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:07:50.591183  204529 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:07:50.612665  204529 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:07:50.612689  204529 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:07:50.612707  204529 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:07:50.612731  204529 start.go:360] acquireMachinesLock for old-k8s-version-023521: {Name:mkc267f5cb7af210c91e5bd6be69f432227b9fc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:07:50.612836  204529 start.go:364] duration metric: took 86.361µs to acquireMachinesLock for "old-k8s-version-023521"
	I1120 21:07:50.612868  204529 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-023521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-023521 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 21:07:50.612944  204529 start.go:125] createHost starting for "" (driver="docker")
	I1120 21:07:50.618216  204529 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 21:07:50.618473  204529 start.go:159] libmachine.API.Create for "old-k8s-version-023521" (driver="docker")
	I1120 21:07:50.618516  204529 client.go:173] LocalClient.Create starting
	I1120 21:07:50.618589  204529 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem
	I1120 21:07:50.618629  204529 main.go:143] libmachine: Decoding PEM data...
	I1120 21:07:50.618655  204529 main.go:143] libmachine: Parsing certificate...
	I1120 21:07:50.618711  204529 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem
	I1120 21:07:50.618734  204529 main.go:143] libmachine: Decoding PEM data...
	I1120 21:07:50.618759  204529 main.go:143] libmachine: Parsing certificate...
	I1120 21:07:50.619133  204529 cli_runner.go:164] Run: docker network inspect old-k8s-version-023521 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 21:07:50.634889  204529 cli_runner.go:211] docker network inspect old-k8s-version-023521 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 21:07:50.634980  204529 network_create.go:284] running [docker network inspect old-k8s-version-023521] to gather additional debugging logs...
	I1120 21:07:50.635051  204529 cli_runner.go:164] Run: docker network inspect old-k8s-version-023521
	W1120 21:07:50.651755  204529 cli_runner.go:211] docker network inspect old-k8s-version-023521 returned with exit code 1
	I1120 21:07:50.651786  204529 network_create.go:287] error running [docker network inspect old-k8s-version-023521]: docker network inspect old-k8s-version-023521: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-023521 not found
	I1120 21:07:50.651817  204529 network_create.go:289] output of [docker network inspect old-k8s-version-023521]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-023521 not found
	
	** /stderr **
	I1120 21:07:50.651919  204529 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:07:50.669365  204529 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8f2399b7fac6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:ce:e1:0f:d8:b1} reservation:<nil>}
	I1120 21:07:50.669711  204529 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-954bfb8e5d57 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:f3:60:ee:cc:b7} reservation:<nil>}
	I1120 21:07:50.670049  204529 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-02e4726a397e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:f0:04:c7:8f:fa} reservation:<nil>}
	I1120 21:07:50.670319  204529 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4845adc70ff8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:55:55:33:b2:ff} reservation:<nil>}
	I1120 21:07:50.670866  204529 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a20740}
	I1120 21:07:50.670900  204529 network_create.go:124] attempt to create docker network old-k8s-version-023521 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1120 21:07:50.670956  204529 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-023521 old-k8s-version-023521
	I1120 21:07:50.746550  204529 network_create.go:108] docker network old-k8s-version-023521 192.168.85.0/24 created
	I1120 21:07:50.746584  204529 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-023521" container
	I1120 21:07:50.746655  204529 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 21:07:50.765587  204529 cli_runner.go:164] Run: docker volume create old-k8s-version-023521 --label name.minikube.sigs.k8s.io=old-k8s-version-023521 --label created_by.minikube.sigs.k8s.io=true
	I1120 21:07:50.784152  204529 oci.go:103] Successfully created a docker volume old-k8s-version-023521
	I1120 21:07:50.784238  204529 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-023521-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-023521 --entrypoint /usr/bin/test -v old-k8s-version-023521:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 21:07:51.391328  204529 oci.go:107] Successfully prepared a docker volume old-k8s-version-023521
	I1120 21:07:51.391403  204529 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1120 21:07:51.391415  204529 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 21:07:51.391482  204529 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-023521:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1120 21:07:56.551620  204529 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-023521:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.160099028s)
	I1120 21:07:56.551655  204529 kic.go:203] duration metric: took 5.160234866s to extract preloaded images to volume ...
	W1120 21:07:56.551803  204529 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1120 21:07:56.551933  204529 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 21:07:56.616896  204529 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-023521 --name old-k8s-version-023521 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-023521 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-023521 --network old-k8s-version-023521 --ip 192.168.85.2 --volume old-k8s-version-023521:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 21:07:56.933091  204529 cli_runner.go:164] Run: docker container inspect old-k8s-version-023521 --format={{.State.Running}}
	I1120 21:07:56.957220  204529 cli_runner.go:164] Run: docker container inspect old-k8s-version-023521 --format={{.State.Status}}
	I1120 21:07:56.976667  204529 cli_runner.go:164] Run: docker exec old-k8s-version-023521 stat /var/lib/dpkg/alternatives/iptables
	I1120 21:07:57.030912  204529 oci.go:144] the created container "old-k8s-version-023521" has a running status.
	I1120 21:07:57.030964  204529 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa...
	I1120 21:07:57.153076  204529 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 21:07:57.178236  204529 cli_runner.go:164] Run: docker container inspect old-k8s-version-023521 --format={{.State.Status}}
	I1120 21:07:57.197178  204529 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 21:07:57.197201  204529 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-023521 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 21:07:57.252554  204529 cli_runner.go:164] Run: docker container inspect old-k8s-version-023521 --format={{.State.Status}}
	I1120 21:07:57.280913  204529 machine.go:94] provisionDockerMachine start ...
	I1120 21:07:57.281000  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:07:57.304546  204529 main.go:143] libmachine: Using SSH client type: native
	I1120 21:07:57.305176  204529 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1120 21:07:57.305193  204529 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:07:57.306065  204529 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:08:00.667758  204529 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-023521
	
	I1120 21:08:00.667786  204529 ubuntu.go:182] provisioning hostname "old-k8s-version-023521"
	I1120 21:08:00.667859  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:00.695623  204529 main.go:143] libmachine: Using SSH client type: native
	I1120 21:08:00.696023  204529 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1120 21:08:00.696046  204529 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-023521 && echo "old-k8s-version-023521" | sudo tee /etc/hostname
	I1120 21:08:00.873397  204529 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-023521
	
	I1120 21:08:00.873491  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:00.892468  204529 main.go:143] libmachine: Using SSH client type: native
	I1120 21:08:00.892832  204529 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1120 21:08:00.892857  204529 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-023521' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-023521/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-023521' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:08:01.034881  204529 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:08:01.034919  204529 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-2300/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-2300/.minikube}
	I1120 21:08:01.034942  204529 ubuntu.go:190] setting up certificates
	I1120 21:08:01.034951  204529 provision.go:84] configureAuth start
	I1120 21:08:01.035023  204529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-023521
	I1120 21:08:01.053079  204529 provision.go:143] copyHostCerts
	I1120 21:08:01.053145  204529 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem, removing ...
	I1120 21:08:01.053154  204529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem
	I1120 21:08:01.053233  204529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem (1078 bytes)
	I1120 21:08:01.053331  204529 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem, removing ...
	I1120 21:08:01.053336  204529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem
	I1120 21:08:01.053363  204529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem (1123 bytes)
	I1120 21:08:01.053419  204529 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem, removing ...
	I1120 21:08:01.053423  204529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem
	I1120 21:08:01.053446  204529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem (1675 bytes)
	I1120 21:08:01.053492  204529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-023521 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-023521]
	I1120 21:08:01.879091  204529 provision.go:177] copyRemoteCerts
	I1120 21:08:01.879157  204529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:08:01.879196  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:01.895728  204529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa Username:docker}
	I1120 21:08:01.998378  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:08:02.020748  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:08:02.041845  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1120 21:08:02.061523  204529 provision.go:87] duration metric: took 1.026540108s to configureAuth
	I1120 21:08:02.061552  204529 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:08:02.061744  204529 config.go:182] Loaded profile config "old-k8s-version-023521": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1120 21:08:02.061758  204529 machine.go:97] duration metric: took 4.780828142s to provisionDockerMachine
	I1120 21:08:02.061766  204529 client.go:176] duration metric: took 11.44324076s to LocalClient.Create
	I1120 21:08:02.061780  204529 start.go:167] duration metric: took 11.443308248s to libmachine.API.Create "old-k8s-version-023521"
	I1120 21:08:02.061792  204529 start.go:293] postStartSetup for "old-k8s-version-023521" (driver="docker")
	I1120 21:08:02.061801  204529 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:08:02.061865  204529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:08:02.061908  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:02.079989  204529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa Username:docker}
	I1120 21:08:02.182770  204529 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:08:02.186031  204529 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:08:02.186060  204529 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:08:02.186075  204529 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-2300/.minikube/addons for local assets ...
	I1120 21:08:02.186130  204529 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-2300/.minikube/files for local assets ...
	I1120 21:08:02.186239  204529 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem -> 40892.pem in /etc/ssl/certs
	I1120 21:08:02.186360  204529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:08:02.194423  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem --> /etc/ssl/certs/40892.pem (1708 bytes)
	I1120 21:08:02.212897  204529 start.go:296] duration metric: took 151.089798ms for postStartSetup
	I1120 21:08:02.213266  204529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-023521
	I1120 21:08:02.230542  204529 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/config.json ...
	I1120 21:08:02.230833  204529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:08:02.230895  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:02.248108  204529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa Username:docker}
	I1120 21:08:02.351867  204529 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:08:02.356717  204529 start.go:128] duration metric: took 11.743757855s to createHost
	I1120 21:08:02.356741  204529 start.go:83] releasing machines lock for "old-k8s-version-023521", held for 11.743890657s
	I1120 21:08:02.356816  204529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-023521
	I1120 21:08:02.373836  204529 ssh_runner.go:195] Run: cat /version.json
	I1120 21:08:02.373879  204529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:08:02.373889  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:02.373947  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:02.398646  204529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa Username:docker}
	I1120 21:08:02.406971  204529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa Username:docker}
	I1120 21:08:02.614261  204529 ssh_runner.go:195] Run: systemctl --version
	I1120 21:08:02.620650  204529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:08:02.624853  204529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:08:02.624975  204529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:08:02.654155  204529 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1120 21:08:02.654183  204529 start.go:496] detecting cgroup driver to use...
	I1120 21:08:02.654225  204529 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:08:02.654285  204529 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1120 21:08:02.672603  204529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1120 21:08:02.685852  204529 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:08:02.685924  204529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:08:02.704782  204529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:08:02.724442  204529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:08:02.849826  204529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:08:02.975618  204529 docker.go:234] disabling docker service ...
	I1120 21:08:02.975699  204529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:08:03.003115  204529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:08:03.022268  204529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:08:03.149241  204529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:08:03.262151  204529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:08:03.275711  204529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:08:03.290427  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1120 21:08:03.300720  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1120 21:08:03.311036  204529 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1120 21:08:03.311112  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1120 21:08:03.320809  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 21:08:03.330581  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1120 21:08:03.340504  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 21:08:03.350367  204529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:08:03.359273  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1120 21:08:03.369391  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1120 21:08:03.379353  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1120 21:08:03.389353  204529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:08:03.397532  204529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:08:03.405309  204529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:08:03.523858  204529 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1120 21:08:03.641572  204529 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1120 21:08:03.641711  204529 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1120 21:08:03.647078  204529 start.go:564] Will wait 60s for crictl version
	I1120 21:08:03.647201  204529 ssh_runner.go:195] Run: which crictl
	I1120 21:08:03.650913  204529 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:08:03.677062  204529 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1120 21:08:03.677200  204529 ssh_runner.go:195] Run: containerd --version
	I1120 21:08:03.698823  204529 ssh_runner.go:195] Run: containerd --version
	I1120 21:08:03.728272  204529 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1120 21:08:03.731279  204529 cli_runner.go:164] Run: docker network inspect old-k8s-version-023521 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:08:03.752768  204529 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 21:08:03.756981  204529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:08:03.768542  204529 kubeadm.go:884] updating cluster {Name:old-k8s-version-023521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-023521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:08:03.768664  204529 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1120 21:08:03.768725  204529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:08:03.798631  204529 containerd.go:627] all images are preloaded for containerd runtime.
	I1120 21:08:03.798657  204529 containerd.go:534] Images already preloaded, skipping extraction
	I1120 21:08:03.798715  204529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:08:03.823336  204529 containerd.go:627] all images are preloaded for containerd runtime.
	I1120 21:08:03.823361  204529 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:08:03.823368  204529 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 containerd true true} ...
	I1120 21:08:03.823464  204529 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-023521 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-023521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:08:03.823529  204529 ssh_runner.go:195] Run: sudo crictl info
	I1120 21:08:03.853097  204529 cni.go:84] Creating CNI manager for ""
	I1120 21:08:03.853123  204529 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:08:03.853140  204529 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:08:03.853165  204529 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-023521 NodeName:old-k8s-version-023521 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:08:03.853295  204529 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-023521"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:08:03.853372  204529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1120 21:08:03.862188  204529 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:08:03.862269  204529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:08:03.870636  204529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1120 21:08:03.884779  204529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:08:03.897983  204529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1120 21:08:03.911141  204529 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:08:03.914894  204529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:08:03.924473  204529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:08:04.047274  204529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:08:04.064190  204529 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521 for IP: 192.168.85.2
	I1120 21:08:04.064214  204529 certs.go:195] generating shared ca certs ...
	I1120 21:08:04.064231  204529 certs.go:227] acquiring lock for ca certs: {Name:mke329f4cdcc6bfc142b6fc6817600b3d33b3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:04.064462  204529 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-2300/.minikube/ca.key
	I1120 21:08:04.064538  204529 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.key
	I1120 21:08:04.064552  204529 certs.go:257] generating profile certs ...
	I1120 21:08:04.064630  204529 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.key
	I1120 21:08:04.064647  204529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt with IP's: []
	I1120 21:08:04.910465  204529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt ...
	I1120 21:08:04.910494  204529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: {Name:mk91938e9ba5fb02364a12aaf04b0ffb15ea019d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:04.910728  204529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.key ...
	I1120 21:08:04.910747  204529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.key: {Name:mk1f1da68502c1749c7086e6b0698c1b1aa7f221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:04.910877  204529 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.key.7434d226
	I1120 21:08:04.910897  204529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.crt.7434d226 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1120 21:08:05.468326  204529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.crt.7434d226 ...
	I1120 21:08:05.468357  204529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.crt.7434d226: {Name:mk8974d363dc793e36fff94558f68e72867f5c2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:05.468591  204529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.key.7434d226 ...
	I1120 21:08:05.468612  204529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.key.7434d226: {Name:mk0cc6a078e63d977d1ac01112d497bfd84610fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:05.468698  204529 certs.go:382] copying /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.crt.7434d226 -> /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.crt
	I1120 21:08:05.468781  204529 certs.go:386] copying /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.key.7434d226 -> /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.key
	I1120 21:08:05.468847  204529 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/proxy-client.key
	I1120 21:08:05.468866  204529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/proxy-client.crt with IP's: []
	I1120 21:08:05.977539  204529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/proxy-client.crt ...
	I1120 21:08:05.977573  204529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/proxy-client.crt: {Name:mk0aedb75b239b869f74169d43558f34de042867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:05.977757  204529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/proxy-client.key ...
	I1120 21:08:05.977771  204529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/proxy-client.key: {Name:mk78805164a04040818958ecee14d66a101c45ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:05.977971  204529 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089.pem (1338 bytes)
	W1120 21:08:05.978014  204529 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089_empty.pem, impossibly tiny 0 bytes
	I1120 21:08:05.978023  204529 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:08:05.978049  204529 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:08:05.978079  204529 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:08:05.978105  204529 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem (1675 bytes)
	I1120 21:08:05.978151  204529 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem (1708 bytes)
	I1120 21:08:05.978737  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:08:06.001395  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:08:06.027046  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:08:06.048561  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:08:06.068034  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1120 21:08:06.086611  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:08:06.105155  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:08:06.122769  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:08:06.140412  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089.pem --> /usr/share/ca-certificates/4089.pem (1338 bytes)
	I1120 21:08:06.158599  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem --> /usr/share/ca-certificates/40892.pem (1708 bytes)
	I1120 21:08:06.177626  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:08:06.197173  204529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:08:06.210213  204529 ssh_runner.go:195] Run: openssl version
	I1120 21:08:06.216414  204529 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4089.pem
	I1120 21:08:06.223563  204529 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4089.pem /etc/ssl/certs/4089.pem
	I1120 21:08:06.231172  204529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4089.pem
	I1120 21:08:06.235004  204529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:28 /usr/share/ca-certificates/4089.pem
	I1120 21:08:06.235068  204529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4089.pem
	I1120 21:08:06.277528  204529 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:08:06.292885  204529 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4089.pem /etc/ssl/certs/51391683.0
	I1120 21:08:06.303578  204529 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/40892.pem
	I1120 21:08:06.312363  204529 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/40892.pem /etc/ssl/certs/40892.pem
	I1120 21:08:06.320702  204529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40892.pem
	I1120 21:08:06.325945  204529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:28 /usr/share/ca-certificates/40892.pem
	I1120 21:08:06.326092  204529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40892.pem
	I1120 21:08:06.367814  204529 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:08:06.375428  204529 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/40892.pem /etc/ssl/certs/3ec20f2e.0
	I1120 21:08:06.383039  204529 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:08:06.391106  204529 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:08:06.398582  204529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:08:06.402146  204529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:08:06.402211  204529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:08:06.443508  204529 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:08:06.450885  204529 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 21:08:06.459499  204529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:08:06.464090  204529 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:08:06.464152  204529 kubeadm.go:401] StartCluster: {Name:old-k8s-version-023521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-023521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:08:06.464225  204529 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1120 21:08:06.464287  204529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:08:06.493270  204529 cri.go:89] found id: ""
	I1120 21:08:06.493345  204529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:08:06.501574  204529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:08:06.509345  204529 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 21:08:06.509461  204529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:08:06.517463  204529 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:08:06.517484  204529 kubeadm.go:158] found existing configuration files:
	
	I1120 21:08:06.517536  204529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 21:08:06.526094  204529 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:08:06.526161  204529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:08:06.534077  204529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 21:08:06.542241  204529 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:08:06.542322  204529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:08:06.549861  204529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 21:08:06.557672  204529 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:08:06.557736  204529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:08:06.565472  204529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 21:08:06.573804  204529 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:08:06.573869  204529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 21:08:06.581411  204529 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 21:08:06.630756  204529 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1120 21:08:06.631017  204529 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 21:08:06.667817  204529 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 21:08:06.667894  204529 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1120 21:08:06.667938  204529 kubeadm.go:319] OS: Linux
	I1120 21:08:06.667991  204529 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 21:08:06.668051  204529 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1120 21:08:06.668103  204529 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 21:08:06.668157  204529 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 21:08:06.668220  204529 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 21:08:06.668275  204529 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 21:08:06.668326  204529 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 21:08:06.668381  204529 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 21:08:06.668433  204529 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1120 21:08:06.771301  204529 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 21:08:06.771420  204529 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 21:08:06.771522  204529 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1120 21:08:06.931381  204529 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 21:08:06.936972  204529 out.go:252]   - Generating certificates and keys ...
	I1120 21:08:06.937120  204529 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 21:08:06.937207  204529 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 21:08:08.348613  204529 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 21:08:08.723144  204529 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 21:08:09.175139  204529 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 21:08:09.914815  204529 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 21:08:10.989838  204529 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 21:08:10.990523  204529 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-023521] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1120 21:08:11.491090  204529 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 21:08:11.491471  204529 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-023521] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1120 21:08:11.723783  204529 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 21:08:12.537901  204529 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 21:08:12.685279  204529 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 21:08:12.685497  204529 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 21:08:13.304430  204529 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 21:08:13.713286  204529 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 21:08:14.961129  204529 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 21:08:15.207439  204529 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 21:08:15.208397  204529 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 21:08:15.211425  204529 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 21:08:15.217259  204529 out.go:252]   - Booting up control plane ...
	I1120 21:08:15.217381  204529 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 21:08:15.217474  204529 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 21:08:15.217552  204529 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 21:08:15.245709  204529 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 21:08:15.247208  204529 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 21:08:15.247264  204529 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 21:08:15.394081  204529 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1120 21:08:22.896894  204529 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.504915 seconds
	I1120 21:08:22.897034  204529 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 21:08:22.914664  204529 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 21:08:23.441125  204529 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 21:08:23.441344  204529 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-023521 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 21:08:23.953562  204529 kubeadm.go:319] [bootstrap-token] Using token: skflu2.9u6vb06ud6qxurxj
	I1120 21:08:23.956553  204529 out.go:252]   - Configuring RBAC rules ...
	I1120 21:08:23.956683  204529 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 21:08:23.961810  204529 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 21:08:23.971097  204529 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 21:08:23.976160  204529 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 21:08:23.980441  204529 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 21:08:23.987112  204529 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 21:08:24.004881  204529 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 21:08:24.306248  204529 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 21:08:24.392874  204529 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 21:08:24.396583  204529 kubeadm.go:319] 
	I1120 21:08:24.396672  204529 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 21:08:24.396680  204529 kubeadm.go:319] 
	I1120 21:08:24.396760  204529 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 21:08:24.396765  204529 kubeadm.go:319] 
	I1120 21:08:24.396797  204529 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 21:08:24.397414  204529 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 21:08:24.397493  204529 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 21:08:24.397499  204529 kubeadm.go:319] 
	I1120 21:08:24.397556  204529 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 21:08:24.397561  204529 kubeadm.go:319] 
	I1120 21:08:24.397616  204529 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 21:08:24.397621  204529 kubeadm.go:319] 
	I1120 21:08:24.397676  204529 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 21:08:24.397754  204529 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 21:08:24.397825  204529 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 21:08:24.397829  204529 kubeadm.go:319] 
	I1120 21:08:24.398173  204529 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 21:08:24.398262  204529 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 21:08:24.398267  204529 kubeadm.go:319] 
	I1120 21:08:24.398664  204529 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token skflu2.9u6vb06ud6qxurxj \
	I1120 21:08:24.398787  204529 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0ba1c2e2176682abc636887486248c0692df9e5785bd62c1c27f34717ff2c43f \
	I1120 21:08:24.399372  204529 kubeadm.go:319] 	--control-plane 
	I1120 21:08:24.399384  204529 kubeadm.go:319] 
	I1120 21:08:24.399831  204529 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 21:08:24.399843  204529 kubeadm.go:319] 
	I1120 21:08:24.400222  204529 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token skflu2.9u6vb06ud6qxurxj \
	I1120 21:08:24.400587  204529 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0ba1c2e2176682abc636887486248c0692df9e5785bd62c1c27f34717ff2c43f 
	I1120 21:08:24.406196  204529 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1120 21:08:24.406335  204529 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 21:08:24.406355  204529 cni.go:84] Creating CNI manager for ""
	I1120 21:08:24.406363  204529 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:08:24.409557  204529 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 21:08:24.412381  204529 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 21:08:24.416824  204529 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1120 21:08:24.416844  204529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 21:08:24.449600  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 21:08:25.504866  204529 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.055190491s)
	I1120 21:08:25.504925  204529 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 21:08:25.505051  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:25.505126  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-023521 minikube.k8s.io/updated_at=2025_11_20T21_08_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=old-k8s-version-023521 minikube.k8s.io/primary=true
	I1120 21:08:25.548296  204529 ops.go:34] apiserver oom_adj: -16
	I1120 21:08:25.760504  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:26.260647  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:26.761517  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:27.261434  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:27.760627  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:28.261065  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:28.760637  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:29.261590  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:29.761521  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:30.260602  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:30.760869  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:31.260595  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:31.761004  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:32.261415  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:32.760660  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:33.261373  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:33.760638  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:34.260690  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:34.761337  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:35.261608  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:35.761183  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:36.261151  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:36.761439  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:37.261027  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:37.469787  204529 kubeadm.go:1114] duration metric: took 11.964777407s to wait for elevateKubeSystemPrivileges
	I1120 21:08:37.469831  204529 kubeadm.go:403] duration metric: took 31.005689899s to StartCluster
	I1120 21:08:37.469848  204529 settings.go:142] acquiring lock: {Name:mk8f1e96fadc1ef170d5eddc49033a884865c024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:37.469922  204529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 21:08:37.471143  204529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/kubeconfig: {Name:mk7ea52a23a4d9fc2da4c68a59479b947db5281c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:37.471421  204529 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 21:08:37.471626  204529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:08:37.471976  204529 config.go:182] Loaded profile config "old-k8s-version-023521": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1120 21:08:37.472048  204529 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:08:37.472610  204529 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-023521"
	I1120 21:08:37.472647  204529 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-023521"
	I1120 21:08:37.472700  204529 host.go:66] Checking if "old-k8s-version-023521" exists ...
	I1120 21:08:37.473260  204529 cli_runner.go:164] Run: docker container inspect old-k8s-version-023521 --format={{.State.Status}}
	I1120 21:08:37.473492  204529 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-023521"
	I1120 21:08:37.473519  204529 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-023521"
	I1120 21:08:37.473859  204529 cli_runner.go:164] Run: docker container inspect old-k8s-version-023521 --format={{.State.Status}}
	I1120 21:08:37.477896  204529 out.go:179] * Verifying Kubernetes components...
	I1120 21:08:37.482822  204529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:08:37.533601  204529 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-023521"
	I1120 21:08:37.533643  204529 host.go:66] Checking if "old-k8s-version-023521" exists ...
	I1120 21:08:37.534175  204529 cli_runner.go:164] Run: docker container inspect old-k8s-version-023521 --format={{.State.Status}}
	I1120 21:08:37.536660  204529 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:08:37.540858  204529 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:08:37.540903  204529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:08:37.540994  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:37.586264  204529 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:08:37.586303  204529 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:08:37.586377  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:37.602100  204529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa Username:docker}
	I1120 21:08:37.657634  204529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa Username:docker}
	I1120 21:08:37.810164  204529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 21:08:37.888519  204529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:08:37.917364  204529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:08:38.020901  204529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:08:38.916486  204529 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.106205164s)
	I1120 21:08:38.916512  204529 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1120 21:08:38.917736  204529 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.029121935s)
	I1120 21:08:38.918736  204529 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-023521" to be "Ready" ...
	I1120 21:08:39.085468  204529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.064517521s)
	I1120 21:08:39.085801  204529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.168301337s)
	I1120 21:08:39.113167  204529 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1120 21:08:39.116067  204529 addons.go:515] duration metric: took 1.644018031s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1120 21:08:39.420241  204529 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-023521" context rescaled to 1 replicas
	W1120 21:08:40.924590  204529 node_ready.go:57] node "old-k8s-version-023521" has "Ready":"False" status (will retry)
	W1120 21:08:43.422708  204529 node_ready.go:57] node "old-k8s-version-023521" has "Ready":"False" status (will retry)
	W1120 21:08:45.922193  204529 node_ready.go:57] node "old-k8s-version-023521" has "Ready":"False" status (will retry)
	W1120 21:08:48.422495  204529 node_ready.go:57] node "old-k8s-version-023521" has "Ready":"False" status (will retry)
	I1120 21:08:50.922229  204529 node_ready.go:49] node "old-k8s-version-023521" is "Ready"
	I1120 21:08:50.922257  204529 node_ready.go:38] duration metric: took 12.003468989s for node "old-k8s-version-023521" to be "Ready" ...
	I1120 21:08:50.922271  204529 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:08:50.922330  204529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:08:50.934728  204529 api_server.go:72] duration metric: took 13.463279551s to wait for apiserver process to appear ...
	I1120 21:08:50.934751  204529 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:08:50.934768  204529 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 21:08:50.943430  204529 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1120 21:08:50.944847  204529 api_server.go:141] control plane version: v1.28.0
	I1120 21:08:50.944872  204529 api_server.go:131] duration metric: took 10.114133ms to wait for apiserver health ...
	I1120 21:08:50.944881  204529 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:08:50.949020  204529 system_pods.go:59] 8 kube-system pods found
	I1120 21:08:50.949059  204529 system_pods.go:61] "coredns-5dd5756b68-wkdjm" [63ea4694-f3ea-4d95-8c6e-98a67aecaf2c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:08:50.949067  204529 system_pods.go:61] "etcd-old-k8s-version-023521" [db607596-46f9-4e1d-a770-b2cb9d3955bd] Running
	I1120 21:08:50.949073  204529 system_pods.go:61] "kindnet-n8fg9" [40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a] Running
	I1120 21:08:50.949078  204529 system_pods.go:61] "kube-apiserver-old-k8s-version-023521" [dd90e3c4-9f74-414e-b99c-1090a4e22dea] Running
	I1120 21:08:50.949083  204529 system_pods.go:61] "kube-controller-manager-old-k8s-version-023521" [453d1d0a-b5fb-4a43-95b4-104fa035b2f6] Running
	I1120 21:08:50.949094  204529 system_pods.go:61] "kube-proxy-9zkv2" [33f7c1e7-cffe-4f30-ba0e-5e494a195fb4] Running
	I1120 21:08:50.949098  204529 system_pods.go:61] "kube-scheduler-old-k8s-version-023521" [d76be00a-70d5-45c3-8a1e-d1e95c187a1f] Running
	I1120 21:08:50.949112  204529 system_pods.go:61] "storage-provisioner" [b53fd876-cd07-4d74-9ca4-925ee07956a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:08:50.949123  204529 system_pods.go:74] duration metric: took 4.23542ms to wait for pod list to return data ...
	I1120 21:08:50.949132  204529 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:08:50.951875  204529 default_sa.go:45] found service account: "default"
	I1120 21:08:50.951897  204529 default_sa.go:55] duration metric: took 2.75816ms for default service account to be created ...
	I1120 21:08:50.951907  204529 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:08:50.955607  204529 system_pods.go:86] 8 kube-system pods found
	I1120 21:08:50.955641  204529 system_pods.go:89] "coredns-5dd5756b68-wkdjm" [63ea4694-f3ea-4d95-8c6e-98a67aecaf2c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:08:50.955647  204529 system_pods.go:89] "etcd-old-k8s-version-023521" [db607596-46f9-4e1d-a770-b2cb9d3955bd] Running
	I1120 21:08:50.955653  204529 system_pods.go:89] "kindnet-n8fg9" [40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a] Running
	I1120 21:08:50.955657  204529 system_pods.go:89] "kube-apiserver-old-k8s-version-023521" [dd90e3c4-9f74-414e-b99c-1090a4e22dea] Running
	I1120 21:08:50.955662  204529 system_pods.go:89] "kube-controller-manager-old-k8s-version-023521" [453d1d0a-b5fb-4a43-95b4-104fa035b2f6] Running
	I1120 21:08:50.955666  204529 system_pods.go:89] "kube-proxy-9zkv2" [33f7c1e7-cffe-4f30-ba0e-5e494a195fb4] Running
	I1120 21:08:50.955671  204529 system_pods.go:89] "kube-scheduler-old-k8s-version-023521" [d76be00a-70d5-45c3-8a1e-d1e95c187a1f] Running
	I1120 21:08:50.955676  204529 system_pods.go:89] "storage-provisioner" [b53fd876-cd07-4d74-9ca4-925ee07956a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:08:50.955703  204529 retry.go:31] will retry after 281.893794ms: missing components: kube-dns
	I1120 21:08:51.248739  204529 system_pods.go:86] 8 kube-system pods found
	I1120 21:08:51.248788  204529 system_pods.go:89] "coredns-5dd5756b68-wkdjm" [63ea4694-f3ea-4d95-8c6e-98a67aecaf2c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:08:51.248796  204529 system_pods.go:89] "etcd-old-k8s-version-023521" [db607596-46f9-4e1d-a770-b2cb9d3955bd] Running
	I1120 21:08:51.248804  204529 system_pods.go:89] "kindnet-n8fg9" [40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a] Running
	I1120 21:08:51.248816  204529 system_pods.go:89] "kube-apiserver-old-k8s-version-023521" [dd90e3c4-9f74-414e-b99c-1090a4e22dea] Running
	I1120 21:08:51.248821  204529 system_pods.go:89] "kube-controller-manager-old-k8s-version-023521" [453d1d0a-b5fb-4a43-95b4-104fa035b2f6] Running
	I1120 21:08:51.248825  204529 system_pods.go:89] "kube-proxy-9zkv2" [33f7c1e7-cffe-4f30-ba0e-5e494a195fb4] Running
	I1120 21:08:51.248831  204529 system_pods.go:89] "kube-scheduler-old-k8s-version-023521" [d76be00a-70d5-45c3-8a1e-d1e95c187a1f] Running
	I1120 21:08:51.248840  204529 system_pods.go:89] "storage-provisioner" [b53fd876-cd07-4d74-9ca4-925ee07956a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:08:51.248858  204529 retry.go:31] will retry after 387.97064ms: missing components: kube-dns
	I1120 21:08:51.640865  204529 system_pods.go:86] 8 kube-system pods found
	I1120 21:08:51.640904  204529 system_pods.go:89] "coredns-5dd5756b68-wkdjm" [63ea4694-f3ea-4d95-8c6e-98a67aecaf2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:08:51.640912  204529 system_pods.go:89] "etcd-old-k8s-version-023521" [db607596-46f9-4e1d-a770-b2cb9d3955bd] Running
	I1120 21:08:51.640917  204529 system_pods.go:89] "kindnet-n8fg9" [40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a] Running
	I1120 21:08:51.640922  204529 system_pods.go:89] "kube-apiserver-old-k8s-version-023521" [dd90e3c4-9f74-414e-b99c-1090a4e22dea] Running
	I1120 21:08:51.640927  204529 system_pods.go:89] "kube-controller-manager-old-k8s-version-023521" [453d1d0a-b5fb-4a43-95b4-104fa035b2f6] Running
	I1120 21:08:51.640933  204529 system_pods.go:89] "kube-proxy-9zkv2" [33f7c1e7-cffe-4f30-ba0e-5e494a195fb4] Running
	I1120 21:08:51.640937  204529 system_pods.go:89] "kube-scheduler-old-k8s-version-023521" [d76be00a-70d5-45c3-8a1e-d1e95c187a1f] Running
	I1120 21:08:51.640941  204529 system_pods.go:89] "storage-provisioner" [b53fd876-cd07-4d74-9ca4-925ee07956a3] Running
	I1120 21:08:51.640949  204529 system_pods.go:126] duration metric: took 689.036121ms to wait for k8s-apps to be running ...
	I1120 21:08:51.640962  204529 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:08:51.641026  204529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:08:51.655371  204529 system_svc.go:56] duration metric: took 14.399252ms WaitForService to wait for kubelet
	I1120 21:08:51.655450  204529 kubeadm.go:587] duration metric: took 14.18400577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:08:51.655483  204529 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:08:51.658633  204529 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:08:51.658667  204529 node_conditions.go:123] node cpu capacity is 2
	I1120 21:08:51.658683  204529 node_conditions.go:105] duration metric: took 3.161284ms to run NodePressure ...
	I1120 21:08:51.658696  204529 start.go:242] waiting for startup goroutines ...
	I1120 21:08:51.658732  204529 start.go:247] waiting for cluster config update ...
	I1120 21:08:51.658752  204529 start.go:256] writing updated cluster config ...
	I1120 21:08:51.659053  204529 ssh_runner.go:195] Run: rm -f paused
	I1120 21:08:51.663208  204529 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:08:51.667537  204529 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-wkdjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:52.673337  204529 pod_ready.go:94] pod "coredns-5dd5756b68-wkdjm" is "Ready"
	I1120 21:08:52.673366  204529 pod_ready.go:86] duration metric: took 1.005803212s for pod "coredns-5dd5756b68-wkdjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:52.676978  204529 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-023521" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:52.682317  204529 pod_ready.go:94] pod "etcd-old-k8s-version-023521" is "Ready"
	I1120 21:08:52.682402  204529 pod_ready.go:86] duration metric: took 5.398591ms for pod "etcd-old-k8s-version-023521" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:52.685821  204529 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-023521" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:52.691092  204529 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-023521" is "Ready"
	I1120 21:08:52.691132  204529 pod_ready.go:86] duration metric: took 5.286211ms for pod "kube-apiserver-old-k8s-version-023521" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:52.697169  204529 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-023521" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:52.871265  204529 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-023521" is "Ready"
	I1120 21:08:52.871290  204529 pod_ready.go:86] duration metric: took 174.088176ms for pod "kube-controller-manager-old-k8s-version-023521" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:53.072481  204529 pod_ready.go:83] waiting for pod "kube-proxy-9zkv2" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:53.471325  204529 pod_ready.go:94] pod "kube-proxy-9zkv2" is "Ready"
	I1120 21:08:53.471355  204529 pod_ready.go:86] duration metric: took 398.847773ms for pod "kube-proxy-9zkv2" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:53.672400  204529 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-023521" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:54.071117  204529 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-023521" is "Ready"
	I1120 21:08:54.071151  204529 pod_ready.go:86] duration metric: took 398.725211ms for pod "kube-scheduler-old-k8s-version-023521" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:54.071164  204529 pod_ready.go:40] duration metric: took 2.407922167s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:08:54.130898  204529 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1120 21:08:54.134333  204529 out.go:203] 
	W1120 21:08:54.137535  204529 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1120 21:08:54.140518  204529 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1120 21:08:54.144230  204529 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-023521" cluster and "default" namespace by default
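
The startup log above reports a client/cluster version skew (kubectl 1.33.2 against Kubernetes 1.28.0, minor skew 5) and points at the bundled kubectl. A version-matched invocation, assuming this run's profile name, would be:

    minikube -p old-k8s-version-023521 kubectl -- get pods -A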
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	e04cc9fd5efbe       1611cd07b61d5       7 seconds ago       Running             busybox                   0                   e535b980cdcfb       busybox                                          default
	f8d3ed91c9212       97e04611ad434       13 seconds ago      Running             coredns                   0                   ed7c26a7b7ae3       coredns-5dd5756b68-wkdjm                         kube-system
	44e2db88a94d2       ba04bb24b9575       13 seconds ago      Running             storage-provisioner       0                   326766b782866       storage-provisioner                              kube-system
	1a68da234c92d       b1a8c6f707935       24 seconds ago      Running             kindnet-cni               0                   1c80c06b261cc       kindnet-n8fg9                                    kube-system
	2f225892ac178       940f54a5bcae9       26 seconds ago      Running             kube-proxy                0                   cc7515c0fb8e0       kube-proxy-9zkv2                                 kube-system
	ec6898b6bdca7       46cc66ccc7c19       47 seconds ago      Running             kube-controller-manager   0                   dd887a12be721       kube-controller-manager-old-k8s-version-023521   kube-system
	a1fecef7703c0       762dce4090c5f       47 seconds ago      Running             kube-scheduler            0                   48c921dde1e30       kube-scheduler-old-k8s-version-023521            kube-system
	926b6b2ac3f4e       9cdd6470f48c8       47 seconds ago      Running             etcd                      0                   5a4b5791116f7       etcd-old-k8s-version-023521                      kube-system
	3838205c7f6e1       00543d2fe5d71       47 seconds ago      Running             kube-apiserver            0                   aa65269d165e6       kube-apiserver-old-k8s-version-023521            kube-system
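
The container listing above is CRI output from inside the node. As a sketch for reproducing it, assuming the same profile and that crictl is available in the minikube node image:

    minikube -p old-k8s-version-023521 ssh -- sudo crictl ps -a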
	
	
	==> containerd <==
	Nov 20 21:08:51 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:51.301655479Z" level=info msg="connecting to shim 44e2db88a94d2e237c818e4da098823b2a70fc76be486872d1c574b4027fbb32" address="unix:///run/containerd/s/c939db798c5f671f6494019542467d44e82202ef9fc87bbda2d5eb2cd698d7f2" protocol=ttrpc version=3
	Nov 20 21:08:51 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:51.305676035Z" level=info msg="CreateContainer within sandbox \"ed7c26a7b7ae3e95daeef87dddb800d013e623fb9e4adbea7be4fa59a0a4d06c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 20 21:08:51 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:51.323636028Z" level=info msg="Container f8d3ed91c92120ebc6cdee4a1cae90ecc19d703c73bfd991ef98f4f6382f468f: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 21:08:51 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:51.337005954Z" level=info msg="CreateContainer within sandbox \"ed7c26a7b7ae3e95daeef87dddb800d013e623fb9e4adbea7be4fa59a0a4d06c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8d3ed91c92120ebc6cdee4a1cae90ecc19d703c73bfd991ef98f4f6382f468f\""
	Nov 20 21:08:51 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:51.338769200Z" level=info msg="StartContainer for \"f8d3ed91c92120ebc6cdee4a1cae90ecc19d703c73bfd991ef98f4f6382f468f\""
	Nov 20 21:08:51 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:51.343016345Z" level=info msg="connecting to shim f8d3ed91c92120ebc6cdee4a1cae90ecc19d703c73bfd991ef98f4f6382f468f" address="unix:///run/containerd/s/99c95334820a3abcda097e97b15e7269c0808203ba77a99301c9c2df78f3c29d" protocol=ttrpc version=3
	Nov 20 21:08:51 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:51.397032106Z" level=info msg="StartContainer for \"44e2db88a94d2e237c818e4da098823b2a70fc76be486872d1c574b4027fbb32\" returns successfully"
	Nov 20 21:08:51 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:51.449601499Z" level=info msg="StartContainer for \"f8d3ed91c92120ebc6cdee4a1cae90ecc19d703c73bfd991ef98f4f6382f468f\" returns successfully"
	Nov 20 21:08:54 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:54.676260743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:9efbd2b5-b6e4-4170-a68d-a23aed850439,Namespace:default,Attempt:0,}"
	Nov 20 21:08:54 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:54.761305036Z" level=info msg="connecting to shim e535b980cdcfb69dcbad021109e901efbb7bfdd1e8e8e113951b4900cc45808f" address="unix:///run/containerd/s/ec9644d8d906904c60650aa1c4483207993c9aa6824d780964f0387e80568c7a" namespace=k8s.io protocol=ttrpc version=3
	Nov 20 21:08:54 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:54.824733553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:9efbd2b5-b6e4-4170-a68d-a23aed850439,Namespace:default,Attempt:0,} returns sandbox id \"e535b980cdcfb69dcbad021109e901efbb7bfdd1e8e8e113951b4900cc45808f\""
	Nov 20 21:08:54 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:54.830879840Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 21:08:56 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:56.970158963Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:08:56 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:56.972073826Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 20 21:08:56 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:56.974852393Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:08:56 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:56.977882614Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:08:56 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:56.978271429Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.147188345s"
	Nov 20 21:08:56 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:56.978314047Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 20 21:08:56 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:56.982421399Z" level=info msg="CreateContainer within sandbox \"e535b980cdcfb69dcbad021109e901efbb7bfdd1e8e8e113951b4900cc45808f\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 20 21:08:56 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:56.995877046Z" level=info msg="Container e04cc9fd5efbe35285fddb9302d2e927d66ed7cf7d89da5df9f56b8803aedcd2: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 21:08:57 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:57.007782290Z" level=info msg="CreateContainer within sandbox \"e535b980cdcfb69dcbad021109e901efbb7bfdd1e8e8e113951b4900cc45808f\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"e04cc9fd5efbe35285fddb9302d2e927d66ed7cf7d89da5df9f56b8803aedcd2\""
	Nov 20 21:08:57 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:57.008589886Z" level=info msg="StartContainer for \"e04cc9fd5efbe35285fddb9302d2e927d66ed7cf7d89da5df9f56b8803aedcd2\""
	Nov 20 21:08:57 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:57.009691180Z" level=info msg="connecting to shim e04cc9fd5efbe35285fddb9302d2e927d66ed7cf7d89da5df9f56b8803aedcd2" address="unix:///run/containerd/s/ec9644d8d906904c60650aa1c4483207993c9aa6824d780964f0387e80568c7a" protocol=ttrpc version=3
	Nov 20 21:08:57 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:57.077826429Z" level=info msg="StartContainer for \"e04cc9fd5efbe35285fddb9302d2e927d66ed7cf7d89da5df9f56b8803aedcd2\" returns successfully"
	Nov 20 21:09:03 old-k8s-version-023521 containerd[756]: E1120 21:09:03.578422     756 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [f8d3ed91c92120ebc6cdee4a1cae90ecc19d703c73bfd991ef98f4f6382f468f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55717 - 58159 "HINFO IN 2254010911663685545.5882765781244882875. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039646393s
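
Earlier in the start log (21:08:37-21:08:38) a host.minikube.internal record was injected into the CoreDNS ConfigMap. A quick check from the host, assuming the kubectl context created by this run, is:

    kubectl --context old-k8s-version-023521 -n kube-system get configmap coredns -o yaml

The Corefile in that ConfigMap should contain the injected block:

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }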
	
	
	==> describe nodes <==
	Name:               old-k8s-version-023521
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-023521
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=old-k8s-version-023521
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_08_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:08:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-023521
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:08:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:08:55 +0000   Thu, 20 Nov 2025 21:08:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:08:55 +0000   Thu, 20 Nov 2025 21:08:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:08:55 +0000   Thu, 20 Nov 2025 21:08:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:08:55 +0000   Thu, 20 Nov 2025 21:08:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-023521
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                b55587fb-b894-4020-a3f5-9d7b089d08c4
	  Boot ID:                    0cc3a06a-788d-45d4-8fff-2131330a9ee0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-wkdjm                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-023521                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kindnet-n8fg9                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-023521             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-023521    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-9zkv2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-023521             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node old-k8s-version-023521 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node old-k8s-version-023521 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x7 over 48s)  kubelet          Node old-k8s-version-023521 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  48s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node old-k8s-version-023521 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node old-k8s-version-023521 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node old-k8s-version-023521 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  40s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27s                node-controller  Node old-k8s-version-023521 event: Registered Node old-k8s-version-023521 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-023521 status is now: NodeReady
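
The node description above (conditions, capacity, allocated resources, events) can be regenerated against this cluster, assuming the same kubectl context, with:

    kubectl --context old-k8s-version-023521 describe node old-k8s-version-023521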
	
	
	==> dmesg <==
	[Nov20 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014399] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.765613] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.782554] kauditd_printk_skb: 36 callbacks suppressed
	[Nov20 20:40] hrtimer: interrupt took 1888672 ns
	
	
	==> etcd [926b6b2ac3f4e183407145e23efc6c8775e0197125017db50628bd815b69e43a] <==
	{"level":"info","ts":"2025-11-20T21:08:17.357843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-20T21:08:17.358058Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-20T21:08:17.369389Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-20T21:08:17.369414Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-20T21:08:17.369343Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-20T21:08:17.370256Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-20T21:08:17.370285Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-20T21:08:17.502541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-20T21:08:17.50269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-20T21:08:17.502753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-20T21:08:17.502794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-20T21:08:17.502859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-20T21:08:17.502912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-20T21:08:17.502963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-20T21:08:17.506643Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-023521 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-20T21:08:17.506862Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T21:08:17.507044Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T21:08:17.50815Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-20T21:08:17.506912Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T21:08:17.5088Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-20T21:08:17.509435Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-20T21:08:17.509646Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-20T21:08:17.510733Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T21:08:17.566546Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T21:08:17.566791Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 21:09:04 up 51 min,  0 user,  load average: 2.35, 3.21, 2.75
	Linux old-k8s-version-023521 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1a68da234c92d87fe620807404b7f86d355dbceb31d217f95048b507faeeb5fb] <==
	I1120 21:08:40.525122       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:08:40.525726       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 21:08:40.526025       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:08:40.526041       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:08:40.526178       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:08:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:08:40.727806       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:08:40.727913       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:08:40.727970       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:08:40.819453       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:08:41.028356       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:08:41.028387       1 metrics.go:72] Registering metrics
	I1120 21:08:41.028608       1 controller.go:711] "Syncing nftables rules"
	I1120 21:08:50.731130       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:08:50.731198       1 main.go:301] handling current node
	I1120 21:09:00.727385       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:09:00.727417       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3838205c7f6e1cc00448237ab227db4e52352ac7f000353d4eebf62e34dd087c] <==
	I1120 21:08:21.129294       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1120 21:08:21.129448       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 21:08:21.135173       1 controller.go:624] quota admission added evaluator for: namespaces
	I1120 21:08:21.150980       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:08:21.152974       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1120 21:08:21.153393       1 aggregator.go:166] initial CRD sync complete...
	I1120 21:08:21.153532       1 autoregister_controller.go:141] Starting autoregister controller
	I1120 21:08:21.153621       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 21:08:21.153716       1 cache.go:39] Caches are synced for autoregister controller
	I1120 21:08:21.180508       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1120 21:08:21.934529       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 21:08:21.942592       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 21:08:21.942625       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:08:22.596122       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:08:22.654509       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:08:22.757958       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 21:08:22.770312       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1120 21:08:22.771800       1 controller.go:624] quota admission added evaluator for: endpoints
	I1120 21:08:22.777081       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:08:22.982263       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1120 21:08:24.286242       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1120 21:08:24.304705       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 21:08:24.317230       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1120 21:08:37.187593       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1120 21:08:37.278082       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ec6898b6bdca72f11c2c4d2ef9c3abb8773c568887034f35c57a3dc6c25aa058] <==
	I1120 21:08:37.249055       1 shared_informer.go:318] Caches are synced for resource quota
	I1120 21:08:37.252979       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-n8fg9"
	I1120 21:08:37.253301       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9zkv2"
	I1120 21:08:37.302559       1 shared_informer.go:318] Caches are synced for attach detach
	I1120 21:08:37.335913       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1120 21:08:37.444687       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-kxxv2"
	I1120 21:08:37.513981       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-wkdjm"
	I1120 21:08:37.571287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="239.650414ms"
	I1120 21:08:37.588038       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.701352ms"
	I1120 21:08:37.629074       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="40.987424ms"
	I1120 21:08:37.629223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.576µs"
	I1120 21:08:37.656124       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 21:08:37.656156       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1120 21:08:37.656483       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 21:08:39.010372       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1120 21:08:39.040385       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-kxxv2"
	I1120 21:08:39.055461       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.011478ms"
	I1120 21:08:39.071704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.152043ms"
	I1120 21:08:39.071968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="139.826µs"
	I1120 21:08:50.825109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.158µs"
	I1120 21:08:50.857892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.863µs"
	I1120 21:08:51.608706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.134µs"
	I1120 21:08:52.074909       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1120 21:08:52.618658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.730237ms"
	I1120 21:08:52.619605       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.667µs"
	
	
	==> kube-proxy [2f225892ac17809d1001502c59aa33c3daa1a4fbf4e2366e33db57fbbd0826f8] <==
	I1120 21:08:38.495387       1 server_others.go:69] "Using iptables proxy"
	I1120 21:08:38.517177       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1120 21:08:38.573772       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:08:38.578240       1 server_others.go:152] "Using iptables Proxier"
	I1120 21:08:38.578279       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1120 21:08:38.578286       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1120 21:08:38.578319       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1120 21:08:38.578720       1 server.go:846] "Version info" version="v1.28.0"
	I1120 21:08:38.578732       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:08:38.579562       1 config.go:188] "Starting service config controller"
	I1120 21:08:38.579586       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1120 21:08:38.579656       1 config.go:97] "Starting endpoint slice config controller"
	I1120 21:08:38.579662       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1120 21:08:38.580189       1 config.go:315] "Starting node config controller"
	I1120 21:08:38.580197       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1120 21:08:38.680113       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1120 21:08:38.680161       1 shared_informer.go:318] Caches are synced for service config
	I1120 21:08:38.680387       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a1fecef7703c09c58fed023c3691cc0d3341e668da394331a0b24fcff4824062] <==
	W1120 21:08:21.732737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1120 21:08:21.732857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1120 21:08:21.733020       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1120 21:08:21.733041       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1120 21:08:21.733231       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1120 21:08:21.733345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1120 21:08:21.735700       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1120 21:08:21.735738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1120 21:08:21.735893       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1120 21:08:21.735907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1120 21:08:21.736197       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1120 21:08:21.736214       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1120 21:08:21.736266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1120 21:08:21.736275       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1120 21:08:21.736604       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1120 21:08:21.736616       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1120 21:08:21.737055       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1120 21:08:21.737068       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1120 21:08:21.737136       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1120 21:08:21.737146       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1120 21:08:21.737204       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1120 21:08:21.737219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1120 21:08:21.737363       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1120 21:08:21.737373       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1120 21:08:23.023618       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.199624    1572 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.201236    1572 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.257019    1572 topology_manager.go:215] "Topology Admit Handler" podUID="33f7c1e7-cffe-4f30-ba0e-5e494a195fb4" podNamespace="kube-system" podName="kube-proxy-9zkv2"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.275077    1572 topology_manager.go:215] "Topology Admit Handler" podUID="40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a" podNamespace="kube-system" podName="kindnet-n8fg9"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.317077    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a-cni-cfg\") pod \"kindnet-n8fg9\" (UID: \"40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a\") " pod="kube-system/kindnet-n8fg9"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.317153    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33f7c1e7-cffe-4f30-ba0e-5e494a195fb4-xtables-lock\") pod \"kube-proxy-9zkv2\" (UID: \"33f7c1e7-cffe-4f30-ba0e-5e494a195fb4\") " pod="kube-system/kube-proxy-9zkv2"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.317189    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnv59\" (UniqueName: \"kubernetes.io/projected/33f7c1e7-cffe-4f30-ba0e-5e494a195fb4-kube-api-access-fnv59\") pod \"kube-proxy-9zkv2\" (UID: \"33f7c1e7-cffe-4f30-ba0e-5e494a195fb4\") " pod="kube-system/kube-proxy-9zkv2"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.317213    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a-lib-modules\") pod \"kindnet-n8fg9\" (UID: \"40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a\") " pod="kube-system/kindnet-n8fg9"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.317243    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfmkj\" (UniqueName: \"kubernetes.io/projected/40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a-kube-api-access-cfmkj\") pod \"kindnet-n8fg9\" (UID: \"40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a\") " pod="kube-system/kindnet-n8fg9"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.317282    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/33f7c1e7-cffe-4f30-ba0e-5e494a195fb4-kube-proxy\") pod \"kube-proxy-9zkv2\" (UID: \"33f7c1e7-cffe-4f30-ba0e-5e494a195fb4\") " pod="kube-system/kube-proxy-9zkv2"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.317311    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33f7c1e7-cffe-4f30-ba0e-5e494a195fb4-lib-modules\") pod \"kube-proxy-9zkv2\" (UID: \"33f7c1e7-cffe-4f30-ba0e-5e494a195fb4\") " pod="kube-system/kube-proxy-9zkv2"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.317337    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a-xtables-lock\") pod \"kindnet-n8fg9\" (UID: \"40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a\") " pod="kube-system/kindnet-n8fg9"
	Nov 20 21:08:38 old-k8s-version-023521 kubelet[1572]: I1120 21:08:38.570841    1572 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9zkv2" podStartSLOduration=1.570767561 podCreationTimestamp="2025-11-20 21:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:08:38.567146331 +0000 UTC m=+14.325419869" watchObservedRunningTime="2025-11-20 21:08:38.570767561 +0000 UTC m=+14.329041107"
	Nov 20 21:08:44 old-k8s-version-023521 kubelet[1572]: I1120 21:08:44.460168    1572 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-n8fg9" podStartSLOduration=5.470600336 podCreationTimestamp="2025-11-20 21:08:37 +0000 UTC" firstStartedPulling="2025-11-20 21:08:38.180227122 +0000 UTC m=+13.938500659" lastFinishedPulling="2025-11-20 21:08:40.169752475 +0000 UTC m=+15.928026013" observedRunningTime="2025-11-20 21:08:40.578571284 +0000 UTC m=+16.336844822" watchObservedRunningTime="2025-11-20 21:08:44.46012569 +0000 UTC m=+20.218399228"
	Nov 20 21:08:50 old-k8s-version-023521 kubelet[1572]: I1120 21:08:50.783694    1572 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 20 21:08:50 old-k8s-version-023521 kubelet[1572]: I1120 21:08:50.815329    1572 topology_manager.go:215] "Topology Admit Handler" podUID="b53fd876-cd07-4d74-9ca4-925ee07956a3" podNamespace="kube-system" podName="storage-provisioner"
	Nov 20 21:08:50 old-k8s-version-023521 kubelet[1572]: I1120 21:08:50.824471    1572 topology_manager.go:215] "Topology Admit Handler" podUID="63ea4694-f3ea-4d95-8c6e-98a67aecaf2c" podNamespace="kube-system" podName="coredns-5dd5756b68-wkdjm"
	Nov 20 21:08:50 old-k8s-version-023521 kubelet[1572]: I1120 21:08:50.913292    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63ea4694-f3ea-4d95-8c6e-98a67aecaf2c-config-volume\") pod \"coredns-5dd5756b68-wkdjm\" (UID: \"63ea4694-f3ea-4d95-8c6e-98a67aecaf2c\") " pod="kube-system/coredns-5dd5756b68-wkdjm"
	Nov 20 21:08:50 old-k8s-version-023521 kubelet[1572]: I1120 21:08:50.913366    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mxf7\" (UniqueName: \"kubernetes.io/projected/b53fd876-cd07-4d74-9ca4-925ee07956a3-kube-api-access-9mxf7\") pod \"storage-provisioner\" (UID: \"b53fd876-cd07-4d74-9ca4-925ee07956a3\") " pod="kube-system/storage-provisioner"
	Nov 20 21:08:50 old-k8s-version-023521 kubelet[1572]: I1120 21:08:50.913404    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b53fd876-cd07-4d74-9ca4-925ee07956a3-tmp\") pod \"storage-provisioner\" (UID: \"b53fd876-cd07-4d74-9ca4-925ee07956a3\") " pod="kube-system/storage-provisioner"
	Nov 20 21:08:50 old-k8s-version-023521 kubelet[1572]: I1120 21:08:50.913431    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dwcm\" (UniqueName: \"kubernetes.io/projected/63ea4694-f3ea-4d95-8c6e-98a67aecaf2c-kube-api-access-4dwcm\") pod \"coredns-5dd5756b68-wkdjm\" (UID: \"63ea4694-f3ea-4d95-8c6e-98a67aecaf2c\") " pod="kube-system/coredns-5dd5756b68-wkdjm"
	Nov 20 21:08:51 old-k8s-version-023521 kubelet[1572]: I1120 21:08:51.630544    1572 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wkdjm" podStartSLOduration=14.630419733 podCreationTimestamp="2025-11-20 21:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:08:51.609419762 +0000 UTC m=+27.367693300" watchObservedRunningTime="2025-11-20 21:08:51.630419733 +0000 UTC m=+27.388693287"
	Nov 20 21:08:52 old-k8s-version-023521 kubelet[1572]: I1120 21:08:52.603535    1572 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.603489155 podCreationTimestamp="2025-11-20 21:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:08:51.631536708 +0000 UTC m=+27.389810246" watchObservedRunningTime="2025-11-20 21:08:52.603489155 +0000 UTC m=+28.361762693"
	Nov 20 21:08:54 old-k8s-version-023521 kubelet[1572]: I1120 21:08:54.369743    1572 topology_manager.go:215] "Topology Admit Handler" podUID="9efbd2b5-b6e4-4170-a68d-a23aed850439" podNamespace="default" podName="busybox"
	Nov 20 21:08:54 old-k8s-version-023521 kubelet[1572]: I1120 21:08:54.444577    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zgjv\" (UniqueName: \"kubernetes.io/projected/9efbd2b5-b6e4-4170-a68d-a23aed850439-kube-api-access-7zgjv\") pod \"busybox\" (UID: \"9efbd2b5-b6e4-4170-a68d-a23aed850439\") " pod="default/busybox"
	
	
	==> storage-provisioner [44e2db88a94d2e237c818e4da098823b2a70fc76be486872d1c574b4027fbb32] <==
	I1120 21:08:51.407613       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 21:08:51.441581       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 21:08:51.441717       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1120 21:08:51.468650       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 21:08:51.471037       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-023521_ce7a5da5-a297-466c-90b2-f74ac14dce09!
	I1120 21:08:51.474735       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"db4fb156-2957-4548-9aa8-7a3e0f9fb8ba", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-023521_ce7a5da5-a297-466c-90b2-f74ac14dce09 became leader
	I1120 21:08:51.572254       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-023521_ce7a5da5-a297-466c-90b2-f74ac14dce09!
	

                                                
                                                
-- /stdout --
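The kube-scheduler entries in the log dump above show a burst of "forbidden" list/watch warnings at 21:08:21 that stops once "Caches are synced" is logged at 21:08:23; that pattern is typically a transient start-up race while the scheduler's RBAC bindings propagate, not a standing permission problem. Had the warnings persisted, a manual spot-check of the scheduler's access could look roughly like this (context name taken from the logs above; assumes the admin kubeconfig is permitted to impersonate):

    kubectl --context old-k8s-version-023521 auth can-i list persistentvolumes --as=system:kube-scheduler
    kubectl --context old-k8s-version-023521 auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler

On a healthy cluster both commands would be expected to print "yes".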
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-023521 -n old-k8s-version-023521
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-023521 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
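The storage-provisioner log above shows leader election completing against the kube-system/k8s.io-minikube-hostpath Endpoints object (event at 21:08:51). If the provisioner ever appeared stuck before "Started provisioner controller", the lock object can be inspected directly; a sketch, reusing the context name from this test:

    kubectl --context old-k8s-version-023521 -n kube-system get endpoints k8s.io-minikube-hostpath -o jsonpath='{.metadata.annotations}'

The control-plane.alpha.kubernetes.io/leader annotation in that output records the current holder identity and lease renew time.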
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-023521
helpers_test.go:243: (dbg) docker inspect old-k8s-version-023521:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "74636f055b722c8519558648b559d62370a6961b557856e2a138e817c51bed49",
	        "Created": "2025-11-20T21:07:56.631557256Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 204921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:07:56.700927153Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/74636f055b722c8519558648b559d62370a6961b557856e2a138e817c51bed49/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74636f055b722c8519558648b559d62370a6961b557856e2a138e817c51bed49/hostname",
	        "HostsPath": "/var/lib/docker/containers/74636f055b722c8519558648b559d62370a6961b557856e2a138e817c51bed49/hosts",
	        "LogPath": "/var/lib/docker/containers/74636f055b722c8519558648b559d62370a6961b557856e2a138e817c51bed49/74636f055b722c8519558648b559d62370a6961b557856e2a138e817c51bed49-json.log",
	        "Name": "/old-k8s-version-023521",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-023521:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-023521",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74636f055b722c8519558648b559d62370a6961b557856e2a138e817c51bed49",
	                "LowerDir": "/var/lib/docker/overlay2/1a6c29fcf2b65f0a5aba098c27207f076274729c865583cb75f14e3f5e8e9d13-init/diff:/var/lib/docker/overlay2/5105da773b59b243b777c3c083d206b6a741bd11ebc5a0283799917fe36ebbb2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a6c29fcf2b65f0a5aba098c27207f076274729c865583cb75f14e3f5e8e9d13/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a6c29fcf2b65f0a5aba098c27207f076274729c865583cb75f14e3f5e8e9d13/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a6c29fcf2b65f0a5aba098c27207f076274729c865583cb75f14e3f5e8e9d13/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-023521",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-023521/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-023521",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-023521",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-023521",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da77aa3a20347ff46dedc0a9ef78336dcc8e064623662f9b41628cae013296ea",
	            "SandboxKey": "/var/run/docker/netns/da77aa3a2034",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-023521": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:2d:84:c5:da:ec",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ca65c34920334c24d3f55f43079525c2100ac74d43715bed2327162f4af2415f",
	                    "EndpointID": "a6998ae74ea0e5ddb5cb609286800ef77944cf6d7685c33f3bf163f013650054",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-023521",
	                        "74636f055b72"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
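In the inspect output above, "HostConfig.Ulimits" is an empty list, so no per-container nofile override is applied to the old-k8s-version-023521 node container and the effective open-file limit is whatever the Docker daemon's defaults provide. A quick manual comparison of that limit, at the exec level and for the container's init process, could look like this (container name taken from the inspect output; purely illustrative, not part of the test):

    docker exec old-k8s-version-023521 sh -c 'ulimit -n'
    docker exec old-k8s-version-023521 sh -c 'grep "open files" /proc/1/limits'

The first value is the soft limit of the exec'd shell; the second shows the limits applied to /sbin/init (PID 1) inside the container, which is the baseline inherited by services it starts unless their unit files override it.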
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-023521 -n old-k8s-version-023521
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-023521 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-023521 logs -n 25: (1.219373125s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-448616 sudo docker system info                                                                                                                                                                                                            │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo containerd config dump                                                                                                                                                                                                        │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ ssh     │ -p cilium-448616 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-982573                                                                                                                                                                                                                        │ kubernetes-upgrade-982573 │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:06 UTC │
	│ delete  │ -p cilium-448616                                                                                                                                                                                                                                    │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:06 UTC │
	│ start   │ -p force-systemd-env-444240 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-444240  │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p cert-expiration-339813 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-339813    │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ force-systemd-env-444240 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-444240  │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ delete  │ -p force-systemd-env-444240                                                                                                                                                                                                                         │ force-systemd-env-444240  │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p cert-options-530158 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-530158       │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ cert-options-530158 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-530158       │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ -p cert-options-530158 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-530158       │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ delete  │ -p cert-options-530158                                                                                                                                                                                                                              │ cert-options-530158       │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p old-k8s-version-023521 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:08 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:07:50
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:07:50.345634  204529 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:07:50.345833  204529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:07:50.345859  204529 out.go:374] Setting ErrFile to fd 2...
	I1120 21:07:50.345881  204529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:07:50.346371  204529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 21:07:50.347217  204529 out.go:368] Setting JSON to false
	I1120 21:07:50.348406  204529 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3020,"bootTime":1763669851,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1120 21:07:50.348524  204529 start.go:143] virtualization:  
	I1120 21:07:50.352212  204529 out.go:179] * [old-k8s-version-023521] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:07:50.356524  204529 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:07:50.356581  204529 notify.go:221] Checking for updates...
	I1120 21:07:50.363361  204529 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:07:50.366491  204529 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 21:07:50.369561  204529 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	I1120 21:07:50.373535  204529 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:07:50.376604  204529 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:07:50.380233  204529 config.go:182] Loaded profile config "cert-expiration-339813": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 21:07:50.380345  204529 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:07:50.413179  204529 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:07:50.413309  204529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:07:50.474237  204529 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 21:07:50.464908521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:07:50.474343  204529 docker.go:319] overlay module found
	I1120 21:07:50.479656  204529 out.go:179] * Using the docker driver based on user configuration
	I1120 21:07:50.482678  204529 start.go:309] selected driver: docker
	I1120 21:07:50.482701  204529 start.go:930] validating driver "docker" against <nil>
	I1120 21:07:50.482716  204529 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:07:50.483457  204529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:07:50.571864  204529 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 21:07:50.55798512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:07:50.572017  204529 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 21:07:50.572283  204529 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:07:50.575452  204529 out.go:179] * Using Docker driver with root privileges
	I1120 21:07:50.578296  204529 cni.go:84] Creating CNI manager for ""
	I1120 21:07:50.578368  204529 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:07:50.578383  204529 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:07:50.578588  204529 start.go:353] cluster config:
	{Name:old-k8s-version-023521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-023521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:07:50.581769  204529 out.go:179] * Starting "old-k8s-version-023521" primary control-plane node in "old-k8s-version-023521" cluster
	I1120 21:07:50.584601  204529 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 21:07:50.587646  204529 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:07:50.590630  204529 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1120 21:07:50.590686  204529 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1120 21:07:50.590697  204529 cache.go:65] Caching tarball of preloaded images
	I1120 21:07:50.590819  204529 preload.go:238] Found /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1120 21:07:50.590835  204529 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1120 21:07:50.590975  204529 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/config.json ...
	I1120 21:07:50.591006  204529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/config.json: {Name:mkc1e1e459ad5ad023bc0c29174f23ee97f50186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:07:50.591183  204529 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:07:50.612665  204529 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:07:50.612689  204529 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:07:50.612707  204529 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:07:50.612731  204529 start.go:360] acquireMachinesLock for old-k8s-version-023521: {Name:mkc267f5cb7af210c91e5bd6be69f432227b9fc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:07:50.612836  204529 start.go:364] duration metric: took 86.361µs to acquireMachinesLock for "old-k8s-version-023521"
	I1120 21:07:50.612868  204529 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-023521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-023521 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 21:07:50.612944  204529 start.go:125] createHost starting for "" (driver="docker")
	I1120 21:07:50.618216  204529 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 21:07:50.618473  204529 start.go:159] libmachine.API.Create for "old-k8s-version-023521" (driver="docker")
	I1120 21:07:50.618516  204529 client.go:173] LocalClient.Create starting
	I1120 21:07:50.618589  204529 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem
	I1120 21:07:50.618629  204529 main.go:143] libmachine: Decoding PEM data...
	I1120 21:07:50.618655  204529 main.go:143] libmachine: Parsing certificate...
	I1120 21:07:50.618711  204529 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem
	I1120 21:07:50.618734  204529 main.go:143] libmachine: Decoding PEM data...
	I1120 21:07:50.618759  204529 main.go:143] libmachine: Parsing certificate...
	I1120 21:07:50.619133  204529 cli_runner.go:164] Run: docker network inspect old-k8s-version-023521 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 21:07:50.634889  204529 cli_runner.go:211] docker network inspect old-k8s-version-023521 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 21:07:50.634980  204529 network_create.go:284] running [docker network inspect old-k8s-version-023521] to gather additional debugging logs...
	I1120 21:07:50.635051  204529 cli_runner.go:164] Run: docker network inspect old-k8s-version-023521
	W1120 21:07:50.651755  204529 cli_runner.go:211] docker network inspect old-k8s-version-023521 returned with exit code 1
	I1120 21:07:50.651786  204529 network_create.go:287] error running [docker network inspect old-k8s-version-023521]: docker network inspect old-k8s-version-023521: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-023521 not found
	I1120 21:07:50.651817  204529 network_create.go:289] output of [docker network inspect old-k8s-version-023521]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-023521 not found
	
	** /stderr **
	I1120 21:07:50.651919  204529 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:07:50.669365  204529 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8f2399b7fac6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:ce:e1:0f:d8:b1} reservation:<nil>}
	I1120 21:07:50.669711  204529 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-954bfb8e5d57 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:f3:60:ee:cc:b7} reservation:<nil>}
	I1120 21:07:50.670049  204529 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-02e4726a397e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:f0:04:c7:8f:fa} reservation:<nil>}
	I1120 21:07:50.670319  204529 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4845adc70ff8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:55:55:33:b2:ff} reservation:<nil>}
	I1120 21:07:50.670866  204529 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a20740}
	I1120 21:07:50.670900  204529 network_create.go:124] attempt to create docker network old-k8s-version-023521 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1120 21:07:50.670956  204529 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-023521 old-k8s-version-023521
	I1120 21:07:50.746550  204529 network_create.go:108] docker network old-k8s-version-023521 192.168.85.0/24 created
	I1120 21:07:50.746584  204529 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-023521" container
	I1120 21:07:50.746655  204529 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 21:07:50.765587  204529 cli_runner.go:164] Run: docker volume create old-k8s-version-023521 --label name.minikube.sigs.k8s.io=old-k8s-version-023521 --label created_by.minikube.sigs.k8s.io=true
	I1120 21:07:50.784152  204529 oci.go:103] Successfully created a docker volume old-k8s-version-023521
	I1120 21:07:50.784238  204529 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-023521-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-023521 --entrypoint /usr/bin/test -v old-k8s-version-023521:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 21:07:51.391328  204529 oci.go:107] Successfully prepared a docker volume old-k8s-version-023521
	I1120 21:07:51.391403  204529 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1120 21:07:51.391415  204529 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 21:07:51.391482  204529 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-023521:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1120 21:07:56.551620  204529 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-023521:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.160099028s)
	I1120 21:07:56.551655  204529 kic.go:203] duration metric: took 5.160234866s to extract preloaded images to volume ...
	W1120 21:07:56.551803  204529 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1120 21:07:56.551933  204529 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 21:07:56.616896  204529 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-023521 --name old-k8s-version-023521 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-023521 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-023521 --network old-k8s-version-023521 --ip 192.168.85.2 --volume old-k8s-version-023521:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 21:07:56.933091  204529 cli_runner.go:164] Run: docker container inspect old-k8s-version-023521 --format={{.State.Running}}
	I1120 21:07:56.957220  204529 cli_runner.go:164] Run: docker container inspect old-k8s-version-023521 --format={{.State.Status}}
	I1120 21:07:56.976667  204529 cli_runner.go:164] Run: docker exec old-k8s-version-023521 stat /var/lib/dpkg/alternatives/iptables
	I1120 21:07:57.030912  204529 oci.go:144] the created container "old-k8s-version-023521" has a running status.
	I1120 21:07:57.030964  204529 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa...
	I1120 21:07:57.153076  204529 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 21:07:57.178236  204529 cli_runner.go:164] Run: docker container inspect old-k8s-version-023521 --format={{.State.Status}}
	I1120 21:07:57.197178  204529 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 21:07:57.197201  204529 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-023521 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 21:07:57.252554  204529 cli_runner.go:164] Run: docker container inspect old-k8s-version-023521 --format={{.State.Status}}
	I1120 21:07:57.280913  204529 machine.go:94] provisionDockerMachine start ...
	I1120 21:07:57.281000  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:07:57.304546  204529 main.go:143] libmachine: Using SSH client type: native
	I1120 21:07:57.305176  204529 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1120 21:07:57.305193  204529 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:07:57.306065  204529 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:08:00.667758  204529 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-023521
	
	I1120 21:08:00.667786  204529 ubuntu.go:182] provisioning hostname "old-k8s-version-023521"
	I1120 21:08:00.667859  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:00.695623  204529 main.go:143] libmachine: Using SSH client type: native
	I1120 21:08:00.696023  204529 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1120 21:08:00.696046  204529 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-023521 && echo "old-k8s-version-023521" | sudo tee /etc/hostname
	I1120 21:08:00.873397  204529 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-023521
	
	I1120 21:08:00.873491  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:00.892468  204529 main.go:143] libmachine: Using SSH client type: native
	I1120 21:08:00.892832  204529 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1120 21:08:00.892857  204529 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-023521' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-023521/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-023521' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:08:01.034881  204529 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:08:01.034919  204529 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-2300/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-2300/.minikube}
	I1120 21:08:01.034942  204529 ubuntu.go:190] setting up certificates
	I1120 21:08:01.034951  204529 provision.go:84] configureAuth start
	I1120 21:08:01.035023  204529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-023521
	I1120 21:08:01.053079  204529 provision.go:143] copyHostCerts
	I1120 21:08:01.053145  204529 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem, removing ...
	I1120 21:08:01.053154  204529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem
	I1120 21:08:01.053233  204529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem (1078 bytes)
	I1120 21:08:01.053331  204529 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem, removing ...
	I1120 21:08:01.053336  204529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem
	I1120 21:08:01.053363  204529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem (1123 bytes)
	I1120 21:08:01.053419  204529 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem, removing ...
	I1120 21:08:01.053423  204529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem
	I1120 21:08:01.053446  204529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem (1675 bytes)
	I1120 21:08:01.053492  204529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-023521 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-023521]
	I1120 21:08:01.879091  204529 provision.go:177] copyRemoteCerts
	I1120 21:08:01.879157  204529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:08:01.879196  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:01.895728  204529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa Username:docker}
	I1120 21:08:01.998378  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:08:02.020748  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:08:02.041845  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1120 21:08:02.061523  204529 provision.go:87] duration metric: took 1.026540108s to configureAuth
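configureAuth above issues a machine server certificate whose SANs come straight from the log line (127.0.0.1, 192.168.85.2, localhost, minikube, old-k8s-version-023521), signed by the profile's CA. A rough Go sketch of issuing that kind of SAN-bearing server certificate, assuming an already-loaded CA cert and key; this is illustrative only, not minikube's provision.go:

package certsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate carrying the SANs logged above,
// signed by the supplied CA. It returns the DER-encoded cert and the new key.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-023521"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-023521"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}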
	I1120 21:08:02.061552  204529 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:08:02.061744  204529 config.go:182] Loaded profile config "old-k8s-version-023521": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1120 21:08:02.061758  204529 machine.go:97] duration metric: took 4.780828142s to provisionDockerMachine
	I1120 21:08:02.061766  204529 client.go:176] duration metric: took 11.44324076s to LocalClient.Create
	I1120 21:08:02.061780  204529 start.go:167] duration metric: took 11.443308248s to libmachine.API.Create "old-k8s-version-023521"
	I1120 21:08:02.061792  204529 start.go:293] postStartSetup for "old-k8s-version-023521" (driver="docker")
	I1120 21:08:02.061801  204529 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:08:02.061865  204529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:08:02.061908  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:02.079989  204529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa Username:docker}
	I1120 21:08:02.182770  204529 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:08:02.186031  204529 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:08:02.186060  204529 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:08:02.186075  204529 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-2300/.minikube/addons for local assets ...
	I1120 21:08:02.186130  204529 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-2300/.minikube/files for local assets ...
	I1120 21:08:02.186239  204529 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem -> 40892.pem in /etc/ssl/certs
	I1120 21:08:02.186360  204529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:08:02.194423  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem --> /etc/ssl/certs/40892.pem (1708 bytes)
	I1120 21:08:02.212897  204529 start.go:296] duration metric: took 151.089798ms for postStartSetup
	I1120 21:08:02.213266  204529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-023521
	I1120 21:08:02.230542  204529 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/config.json ...
	I1120 21:08:02.230833  204529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:08:02.230895  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:02.248108  204529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa Username:docker}
	I1120 21:08:02.351867  204529 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:08:02.356717  204529 start.go:128] duration metric: took 11.743757855s to createHost
	I1120 21:08:02.356741  204529 start.go:83] releasing machines lock for "old-k8s-version-023521", held for 11.743890657s
	I1120 21:08:02.356816  204529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-023521
	I1120 21:08:02.373836  204529 ssh_runner.go:195] Run: cat /version.json
	I1120 21:08:02.373879  204529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:08:02.373889  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:02.373947  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:02.398646  204529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa Username:docker}
	I1120 21:08:02.406971  204529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa Username:docker}
	I1120 21:08:02.614261  204529 ssh_runner.go:195] Run: systemctl --version
	I1120 21:08:02.620650  204529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:08:02.624853  204529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:08:02.624975  204529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:08:02.654155  204529 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1120 21:08:02.654183  204529 start.go:496] detecting cgroup driver to use...
	I1120 21:08:02.654225  204529 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:08:02.654285  204529 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1120 21:08:02.672603  204529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1120 21:08:02.685852  204529 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:08:02.685924  204529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:08:02.704782  204529 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:08:02.724442  204529 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:08:02.849826  204529 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:08:02.975618  204529 docker.go:234] disabling docker service ...
	I1120 21:08:02.975699  204529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:08:03.003115  204529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:08:03.022268  204529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:08:03.149241  204529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:08:03.262151  204529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:08:03.275711  204529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:08:03.290427  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1120 21:08:03.300720  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1120 21:08:03.311036  204529 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1120 21:08:03.311112  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1120 21:08:03.320809  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 21:08:03.330581  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1120 21:08:03.340504  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 21:08:03.350367  204529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:08:03.359273  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1120 21:08:03.369391  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1120 21:08:03.379353  204529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1120 21:08:03.389353  204529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:08:03.397532  204529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:08:03.405309  204529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:08:03.523858  204529 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1120 21:08:03.641572  204529 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1120 21:08:03.641711  204529 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1120 21:08:03.647078  204529 start.go:564] Will wait 60s for crictl version
	I1120 21:08:03.647201  204529 ssh_runner.go:195] Run: which crictl
	I1120 21:08:03.650913  204529 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:08:03.677062  204529 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1120 21:08:03.677200  204529 ssh_runner.go:195] Run: containerd --version
	I1120 21:08:03.698823  204529 ssh_runner.go:195] Run: containerd --version
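The "Will wait 60s for socket path /run/containerd/containerd.sock" step above is a poll-until-present loop against the restarted runtime's socket. A minimal Go sketch of that kind of wait, under assumed names and polling interval (not minikube's start.go):

package waitsketch

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the given path exists as a unix socket or the
// timeout elapses, mirroring the 60s wait logged above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

Usage would be waitForSocket("/run/containerd/containerd.sock", 60*time.Second) before running crictl version, as the log does.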
	I1120 21:08:03.728272  204529 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1120 21:08:03.731279  204529 cli_runner.go:164] Run: docker network inspect old-k8s-version-023521 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:08:03.752768  204529 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 21:08:03.756981  204529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:08:03.768542  204529 kubeadm.go:884] updating cluster {Name:old-k8s-version-023521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-023521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:08:03.768664  204529 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1120 21:08:03.768725  204529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:08:03.798631  204529 containerd.go:627] all images are preloaded for containerd runtime.
	I1120 21:08:03.798657  204529 containerd.go:534] Images already preloaded, skipping extraction
	I1120 21:08:03.798715  204529 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:08:03.823336  204529 containerd.go:627] all images are preloaded for containerd runtime.
	I1120 21:08:03.823361  204529 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:08:03.823368  204529 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 containerd true true} ...
	I1120 21:08:03.823464  204529 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-023521 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-023521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:08:03.823529  204529 ssh_runner.go:195] Run: sudo crictl info
	I1120 21:08:03.853097  204529 cni.go:84] Creating CNI manager for ""
	I1120 21:08:03.853123  204529 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:08:03.853140  204529 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:08:03.853165  204529 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-023521 NodeName:old-k8s-version-023521 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:08:03.853295  204529 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-023521"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:08:03.853372  204529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1120 21:08:03.862188  204529 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:08:03.862269  204529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:08:03.870636  204529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1120 21:08:03.884779  204529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:08:03.897983  204529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1120 21:08:03.911141  204529 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:08:03.914894  204529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:08:03.924473  204529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:08:04.047274  204529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:08:04.064190  204529 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521 for IP: 192.168.85.2
	I1120 21:08:04.064214  204529 certs.go:195] generating shared ca certs ...
	I1120 21:08:04.064231  204529 certs.go:227] acquiring lock for ca certs: {Name:mke329f4cdcc6bfc142b6fc6817600b3d33b3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:04.064462  204529 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-2300/.minikube/ca.key
	I1120 21:08:04.064538  204529 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.key
	I1120 21:08:04.064552  204529 certs.go:257] generating profile certs ...
	I1120 21:08:04.064630  204529 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.key
	I1120 21:08:04.064647  204529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt with IP's: []
	I1120 21:08:04.910465  204529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt ...
	I1120 21:08:04.910494  204529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: {Name:mk91938e9ba5fb02364a12aaf04b0ffb15ea019d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:04.910728  204529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.key ...
	I1120 21:08:04.910747  204529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.key: {Name:mk1f1da68502c1749c7086e6b0698c1b1aa7f221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:04.910877  204529 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.key.7434d226
	I1120 21:08:04.910897  204529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.crt.7434d226 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1120 21:08:05.468326  204529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.crt.7434d226 ...
	I1120 21:08:05.468357  204529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.crt.7434d226: {Name:mk8974d363dc793e36fff94558f68e72867f5c2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:05.468591  204529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.key.7434d226 ...
	I1120 21:08:05.468612  204529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.key.7434d226: {Name:mk0cc6a078e63d977d1ac01112d497bfd84610fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:05.468698  204529 certs.go:382] copying /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.crt.7434d226 -> /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.crt
	I1120 21:08:05.468781  204529 certs.go:386] copying /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.key.7434d226 -> /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.key
	I1120 21:08:05.468847  204529 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/proxy-client.key
	I1120 21:08:05.468866  204529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/proxy-client.crt with IP's: []
	I1120 21:08:05.977539  204529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/proxy-client.crt ...
	I1120 21:08:05.977573  204529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/proxy-client.crt: {Name:mk0aedb75b239b869f74169d43558f34de042867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:05.977757  204529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/proxy-client.key ...
	I1120 21:08:05.977771  204529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/proxy-client.key: {Name:mk78805164a04040818958ecee14d66a101c45ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:05.977971  204529 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089.pem (1338 bytes)
	W1120 21:08:05.978014  204529 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089_empty.pem, impossibly tiny 0 bytes
	I1120 21:08:05.978023  204529 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:08:05.978049  204529 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:08:05.978079  204529 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:08:05.978105  204529 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem (1675 bytes)
	I1120 21:08:05.978151  204529 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem (1708 bytes)
	I1120 21:08:05.978737  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:08:06.001395  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:08:06.027046  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:08:06.048561  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:08:06.068034  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1120 21:08:06.086611  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:08:06.105155  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:08:06.122769  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:08:06.140412  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089.pem --> /usr/share/ca-certificates/4089.pem (1338 bytes)
	I1120 21:08:06.158599  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem --> /usr/share/ca-certificates/40892.pem (1708 bytes)
	I1120 21:08:06.177626  204529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:08:06.197173  204529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:08:06.210213  204529 ssh_runner.go:195] Run: openssl version
	I1120 21:08:06.216414  204529 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4089.pem
	I1120 21:08:06.223563  204529 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4089.pem /etc/ssl/certs/4089.pem
	I1120 21:08:06.231172  204529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4089.pem
	I1120 21:08:06.235004  204529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:28 /usr/share/ca-certificates/4089.pem
	I1120 21:08:06.235068  204529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4089.pem
	I1120 21:08:06.277528  204529 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:08:06.292885  204529 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4089.pem /etc/ssl/certs/51391683.0
	I1120 21:08:06.303578  204529 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/40892.pem
	I1120 21:08:06.312363  204529 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/40892.pem /etc/ssl/certs/40892.pem
	I1120 21:08:06.320702  204529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40892.pem
	I1120 21:08:06.325945  204529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:28 /usr/share/ca-certificates/40892.pem
	I1120 21:08:06.326092  204529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40892.pem
	I1120 21:08:06.367814  204529 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:08:06.375428  204529 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/40892.pem /etc/ssl/certs/3ec20f2e.0
	I1120 21:08:06.383039  204529 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:08:06.391106  204529 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:08:06.398582  204529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:08:06.402146  204529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:08:06.402211  204529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:08:06.443508  204529 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:08:06.450885  204529 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
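The openssl x509 -hash / ln -fs sequence above installs each CA certificate into the system trust store by symlinking it at /etc/ssl/certs/<subject-hash>.0 (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). A rough Go sketch of that pattern, shelling out to openssl the same way the log does; the paths and function name are illustrative assumptions:

package trustsketch

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert computes OpenSSL's subject hash for a PEM certificate and links
// it into /etc/ssl/certs as <hash>.0, like the ln -fs commands logged above.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // mimic ln -fs: replace any existing link
	return os.Symlink(pemPath, link)
}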
	I1120 21:08:06.459499  204529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:08:06.464090  204529 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:08:06.464152  204529 kubeadm.go:401] StartCluster: {Name:old-k8s-version-023521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-023521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:08:06.464225  204529 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1120 21:08:06.464287  204529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:08:06.493270  204529 cri.go:89] found id: ""
	I1120 21:08:06.493345  204529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:08:06.501574  204529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:08:06.509345  204529 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 21:08:06.509461  204529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:08:06.517463  204529 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:08:06.517484  204529 kubeadm.go:158] found existing configuration files:
	
	I1120 21:08:06.517536  204529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 21:08:06.526094  204529 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:08:06.526161  204529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:08:06.534077  204529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 21:08:06.542241  204529 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:08:06.542322  204529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:08:06.549861  204529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 21:08:06.557672  204529 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:08:06.557736  204529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:08:06.565472  204529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 21:08:06.573804  204529 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:08:06.573869  204529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 21:08:06.581411  204529 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 21:08:06.630756  204529 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1120 21:08:06.631017  204529 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 21:08:06.667817  204529 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 21:08:06.667894  204529 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1120 21:08:06.667938  204529 kubeadm.go:319] OS: Linux
	I1120 21:08:06.667991  204529 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 21:08:06.668051  204529 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1120 21:08:06.668103  204529 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 21:08:06.668157  204529 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 21:08:06.668220  204529 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 21:08:06.668275  204529 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 21:08:06.668326  204529 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 21:08:06.668381  204529 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 21:08:06.668433  204529 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1120 21:08:06.771301  204529 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 21:08:06.771420  204529 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 21:08:06.771522  204529 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1120 21:08:06.931381  204529 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 21:08:06.936972  204529 out.go:252]   - Generating certificates and keys ...
	I1120 21:08:06.937120  204529 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 21:08:06.937207  204529 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 21:08:08.348613  204529 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 21:08:08.723144  204529 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 21:08:09.175139  204529 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 21:08:09.914815  204529 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 21:08:10.989838  204529 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 21:08:10.990523  204529 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-023521] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1120 21:08:11.491090  204529 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 21:08:11.491471  204529 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-023521] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1120 21:08:11.723783  204529 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 21:08:12.537901  204529 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 21:08:12.685279  204529 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 21:08:12.685497  204529 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 21:08:13.304430  204529 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 21:08:13.713286  204529 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 21:08:14.961129  204529 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 21:08:15.207439  204529 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 21:08:15.208397  204529 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 21:08:15.211425  204529 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 21:08:15.217259  204529 out.go:252]   - Booting up control plane ...
	I1120 21:08:15.217381  204529 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 21:08:15.217474  204529 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 21:08:15.217552  204529 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 21:08:15.245709  204529 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 21:08:15.247208  204529 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 21:08:15.247264  204529 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 21:08:15.394081  204529 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1120 21:08:22.896894  204529 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.504915 seconds
	I1120 21:08:22.897034  204529 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 21:08:22.914664  204529 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 21:08:23.441125  204529 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 21:08:23.441344  204529 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-023521 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 21:08:23.953562  204529 kubeadm.go:319] [bootstrap-token] Using token: skflu2.9u6vb06ud6qxurxj
	I1120 21:08:23.956553  204529 out.go:252]   - Configuring RBAC rules ...
	I1120 21:08:23.956683  204529 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 21:08:23.961810  204529 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 21:08:23.971097  204529 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 21:08:23.976160  204529 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 21:08:23.980441  204529 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 21:08:23.987112  204529 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 21:08:24.004881  204529 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 21:08:24.306248  204529 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 21:08:24.392874  204529 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 21:08:24.396583  204529 kubeadm.go:319] 
	I1120 21:08:24.396672  204529 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 21:08:24.396680  204529 kubeadm.go:319] 
	I1120 21:08:24.396760  204529 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 21:08:24.396765  204529 kubeadm.go:319] 
	I1120 21:08:24.396797  204529 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 21:08:24.397414  204529 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 21:08:24.397493  204529 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 21:08:24.397499  204529 kubeadm.go:319] 
	I1120 21:08:24.397556  204529 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 21:08:24.397561  204529 kubeadm.go:319] 
	I1120 21:08:24.397616  204529 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 21:08:24.397621  204529 kubeadm.go:319] 
	I1120 21:08:24.397676  204529 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 21:08:24.397754  204529 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 21:08:24.397825  204529 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 21:08:24.397829  204529 kubeadm.go:319] 
	I1120 21:08:24.398173  204529 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 21:08:24.398262  204529 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 21:08:24.398267  204529 kubeadm.go:319] 
	I1120 21:08:24.398664  204529 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token skflu2.9u6vb06ud6qxurxj \
	I1120 21:08:24.398787  204529 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0ba1c2e2176682abc636887486248c0692df9e5785bd62c1c27f34717ff2c43f \
	I1120 21:08:24.399372  204529 kubeadm.go:319] 	--control-plane 
	I1120 21:08:24.399384  204529 kubeadm.go:319] 
	I1120 21:08:24.399831  204529 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 21:08:24.399843  204529 kubeadm.go:319] 
	I1120 21:08:24.400222  204529 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token skflu2.9u6vb06ud6qxurxj \
	I1120 21:08:24.400587  204529 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0ba1c2e2176682abc636887486248c0692df9e5785bd62c1c27f34717ff2c43f 
	I1120 21:08:24.406196  204529 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1120 21:08:24.406335  204529 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 21:08:24.406355  204529 cni.go:84] Creating CNI manager for ""
	I1120 21:08:24.406363  204529 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:08:24.409557  204529 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 21:08:24.412381  204529 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 21:08:24.416824  204529 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1120 21:08:24.416844  204529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 21:08:24.449600  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 21:08:25.504866  204529 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.055190491s)
	I1120 21:08:25.504925  204529 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 21:08:25.505051  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:25.505126  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-023521 minikube.k8s.io/updated_at=2025_11_20T21_08_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=old-k8s-version-023521 minikube.k8s.io/primary=true
	I1120 21:08:25.548296  204529 ops.go:34] apiserver oom_adj: -16
	I1120 21:08:25.760504  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:26.260647  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:26.761517  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:27.261434  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:27.760627  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:28.261065  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:28.760637  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:29.261590  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:29.761521  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:30.260602  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:30.760869  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:31.260595  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:31.761004  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:32.261415  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:32.760660  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:33.261373  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:33.760638  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:34.260690  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:34.761337  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:35.261608  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:35.761183  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:36.261151  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:36.761439  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:37.261027  204529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:08:37.469787  204529 kubeadm.go:1114] duration metric: took 11.964777407s to wait for elevateKubeSystemPrivileges
	I1120 21:08:37.469831  204529 kubeadm.go:403] duration metric: took 31.005689899s to StartCluster
	I1120 21:08:37.469848  204529 settings.go:142] acquiring lock: {Name:mk8f1e96fadc1ef170d5eddc49033a884865c024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:37.469922  204529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 21:08:37.471143  204529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/kubeconfig: {Name:mk7ea52a23a4d9fc2da4c68a59479b947db5281c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:08:37.471421  204529 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 21:08:37.471626  204529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:08:37.471976  204529 config.go:182] Loaded profile config "old-k8s-version-023521": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1120 21:08:37.472048  204529 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:08:37.472610  204529 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-023521"
	I1120 21:08:37.472647  204529 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-023521"
	I1120 21:08:37.472700  204529 host.go:66] Checking if "old-k8s-version-023521" exists ...
	I1120 21:08:37.473260  204529 cli_runner.go:164] Run: docker container inspect old-k8s-version-023521 --format={{.State.Status}}
	I1120 21:08:37.473492  204529 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-023521"
	I1120 21:08:37.473519  204529 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-023521"
	I1120 21:08:37.473859  204529 cli_runner.go:164] Run: docker container inspect old-k8s-version-023521 --format={{.State.Status}}
	I1120 21:08:37.477896  204529 out.go:179] * Verifying Kubernetes components...
	I1120 21:08:37.482822  204529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:08:37.533601  204529 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-023521"
	I1120 21:08:37.533643  204529 host.go:66] Checking if "old-k8s-version-023521" exists ...
	I1120 21:08:37.534175  204529 cli_runner.go:164] Run: docker container inspect old-k8s-version-023521 --format={{.State.Status}}
	I1120 21:08:37.536660  204529 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:08:37.540858  204529 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:08:37.540903  204529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:08:37.540994  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:37.586264  204529 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:08:37.586303  204529 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:08:37.586377  204529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023521
	I1120 21:08:37.602100  204529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa Username:docker}
	I1120 21:08:37.657634  204529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/old-k8s-version-023521/id_rsa Username:docker}
	I1120 21:08:37.810164  204529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 21:08:37.888519  204529 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:08:37.917364  204529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:08:38.020901  204529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:08:38.916486  204529 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.106205164s)
	I1120 21:08:38.916512  204529 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1120 21:08:38.917736  204529 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.029121935s)
	I1120 21:08:38.918736  204529 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-023521" to be "Ready" ...
	I1120 21:08:39.085468  204529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.064517521s)
	I1120 21:08:39.085801  204529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.168301337s)
	I1120 21:08:39.113167  204529 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1120 21:08:39.116067  204529 addons.go:515] duration metric: took 1.644018031s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1120 21:08:39.420241  204529 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-023521" context rescaled to 1 replicas
	W1120 21:08:40.924590  204529 node_ready.go:57] node "old-k8s-version-023521" has "Ready":"False" status (will retry)
	W1120 21:08:43.422708  204529 node_ready.go:57] node "old-k8s-version-023521" has "Ready":"False" status (will retry)
	W1120 21:08:45.922193  204529 node_ready.go:57] node "old-k8s-version-023521" has "Ready":"False" status (will retry)
	W1120 21:08:48.422495  204529 node_ready.go:57] node "old-k8s-version-023521" has "Ready":"False" status (will retry)
	I1120 21:08:50.922229  204529 node_ready.go:49] node "old-k8s-version-023521" is "Ready"
	I1120 21:08:50.922257  204529 node_ready.go:38] duration metric: took 12.003468989s for node "old-k8s-version-023521" to be "Ready" ...
	I1120 21:08:50.922271  204529 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:08:50.922330  204529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:08:50.934728  204529 api_server.go:72] duration metric: took 13.463279551s to wait for apiserver process to appear ...
	I1120 21:08:50.934751  204529 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:08:50.934768  204529 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 21:08:50.943430  204529 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1120 21:08:50.944847  204529 api_server.go:141] control plane version: v1.28.0
	I1120 21:08:50.944872  204529 api_server.go:131] duration metric: took 10.114133ms to wait for apiserver health ...
	I1120 21:08:50.944881  204529 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:08:50.949020  204529 system_pods.go:59] 8 kube-system pods found
	I1120 21:08:50.949059  204529 system_pods.go:61] "coredns-5dd5756b68-wkdjm" [63ea4694-f3ea-4d95-8c6e-98a67aecaf2c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:08:50.949067  204529 system_pods.go:61] "etcd-old-k8s-version-023521" [db607596-46f9-4e1d-a770-b2cb9d3955bd] Running
	I1120 21:08:50.949073  204529 system_pods.go:61] "kindnet-n8fg9" [40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a] Running
	I1120 21:08:50.949078  204529 system_pods.go:61] "kube-apiserver-old-k8s-version-023521" [dd90e3c4-9f74-414e-b99c-1090a4e22dea] Running
	I1120 21:08:50.949083  204529 system_pods.go:61] "kube-controller-manager-old-k8s-version-023521" [453d1d0a-b5fb-4a43-95b4-104fa035b2f6] Running
	I1120 21:08:50.949094  204529 system_pods.go:61] "kube-proxy-9zkv2" [33f7c1e7-cffe-4f30-ba0e-5e494a195fb4] Running
	I1120 21:08:50.949098  204529 system_pods.go:61] "kube-scheduler-old-k8s-version-023521" [d76be00a-70d5-45c3-8a1e-d1e95c187a1f] Running
	I1120 21:08:50.949112  204529 system_pods.go:61] "storage-provisioner" [b53fd876-cd07-4d74-9ca4-925ee07956a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:08:50.949123  204529 system_pods.go:74] duration metric: took 4.23542ms to wait for pod list to return data ...
	I1120 21:08:50.949132  204529 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:08:50.951875  204529 default_sa.go:45] found service account: "default"
	I1120 21:08:50.951897  204529 default_sa.go:55] duration metric: took 2.75816ms for default service account to be created ...
	I1120 21:08:50.951907  204529 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:08:50.955607  204529 system_pods.go:86] 8 kube-system pods found
	I1120 21:08:50.955641  204529 system_pods.go:89] "coredns-5dd5756b68-wkdjm" [63ea4694-f3ea-4d95-8c6e-98a67aecaf2c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:08:50.955647  204529 system_pods.go:89] "etcd-old-k8s-version-023521" [db607596-46f9-4e1d-a770-b2cb9d3955bd] Running
	I1120 21:08:50.955653  204529 system_pods.go:89] "kindnet-n8fg9" [40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a] Running
	I1120 21:08:50.955657  204529 system_pods.go:89] "kube-apiserver-old-k8s-version-023521" [dd90e3c4-9f74-414e-b99c-1090a4e22dea] Running
	I1120 21:08:50.955662  204529 system_pods.go:89] "kube-controller-manager-old-k8s-version-023521" [453d1d0a-b5fb-4a43-95b4-104fa035b2f6] Running
	I1120 21:08:50.955666  204529 system_pods.go:89] "kube-proxy-9zkv2" [33f7c1e7-cffe-4f30-ba0e-5e494a195fb4] Running
	I1120 21:08:50.955671  204529 system_pods.go:89] "kube-scheduler-old-k8s-version-023521" [d76be00a-70d5-45c3-8a1e-d1e95c187a1f] Running
	I1120 21:08:50.955676  204529 system_pods.go:89] "storage-provisioner" [b53fd876-cd07-4d74-9ca4-925ee07956a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:08:50.955703  204529 retry.go:31] will retry after 281.893794ms: missing components: kube-dns
	I1120 21:08:51.248739  204529 system_pods.go:86] 8 kube-system pods found
	I1120 21:08:51.248788  204529 system_pods.go:89] "coredns-5dd5756b68-wkdjm" [63ea4694-f3ea-4d95-8c6e-98a67aecaf2c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:08:51.248796  204529 system_pods.go:89] "etcd-old-k8s-version-023521" [db607596-46f9-4e1d-a770-b2cb9d3955bd] Running
	I1120 21:08:51.248804  204529 system_pods.go:89] "kindnet-n8fg9" [40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a] Running
	I1120 21:08:51.248816  204529 system_pods.go:89] "kube-apiserver-old-k8s-version-023521" [dd90e3c4-9f74-414e-b99c-1090a4e22dea] Running
	I1120 21:08:51.248821  204529 system_pods.go:89] "kube-controller-manager-old-k8s-version-023521" [453d1d0a-b5fb-4a43-95b4-104fa035b2f6] Running
	I1120 21:08:51.248825  204529 system_pods.go:89] "kube-proxy-9zkv2" [33f7c1e7-cffe-4f30-ba0e-5e494a195fb4] Running
	I1120 21:08:51.248831  204529 system_pods.go:89] "kube-scheduler-old-k8s-version-023521" [d76be00a-70d5-45c3-8a1e-d1e95c187a1f] Running
	I1120 21:08:51.248840  204529 system_pods.go:89] "storage-provisioner" [b53fd876-cd07-4d74-9ca4-925ee07956a3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:08:51.248858  204529 retry.go:31] will retry after 387.97064ms: missing components: kube-dns
	I1120 21:08:51.640865  204529 system_pods.go:86] 8 kube-system pods found
	I1120 21:08:51.640904  204529 system_pods.go:89] "coredns-5dd5756b68-wkdjm" [63ea4694-f3ea-4d95-8c6e-98a67aecaf2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:08:51.640912  204529 system_pods.go:89] "etcd-old-k8s-version-023521" [db607596-46f9-4e1d-a770-b2cb9d3955bd] Running
	I1120 21:08:51.640917  204529 system_pods.go:89] "kindnet-n8fg9" [40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a] Running
	I1120 21:08:51.640922  204529 system_pods.go:89] "kube-apiserver-old-k8s-version-023521" [dd90e3c4-9f74-414e-b99c-1090a4e22dea] Running
	I1120 21:08:51.640927  204529 system_pods.go:89] "kube-controller-manager-old-k8s-version-023521" [453d1d0a-b5fb-4a43-95b4-104fa035b2f6] Running
	I1120 21:08:51.640933  204529 system_pods.go:89] "kube-proxy-9zkv2" [33f7c1e7-cffe-4f30-ba0e-5e494a195fb4] Running
	I1120 21:08:51.640937  204529 system_pods.go:89] "kube-scheduler-old-k8s-version-023521" [d76be00a-70d5-45c3-8a1e-d1e95c187a1f] Running
	I1120 21:08:51.640941  204529 system_pods.go:89] "storage-provisioner" [b53fd876-cd07-4d74-9ca4-925ee07956a3] Running
	I1120 21:08:51.640949  204529 system_pods.go:126] duration metric: took 689.036121ms to wait for k8s-apps to be running ...
	I1120 21:08:51.640962  204529 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:08:51.641026  204529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:08:51.655371  204529 system_svc.go:56] duration metric: took 14.399252ms WaitForService to wait for kubelet
	I1120 21:08:51.655450  204529 kubeadm.go:587] duration metric: took 14.18400577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:08:51.655483  204529 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:08:51.658633  204529 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:08:51.658667  204529 node_conditions.go:123] node cpu capacity is 2
	I1120 21:08:51.658683  204529 node_conditions.go:105] duration metric: took 3.161284ms to run NodePressure ...
	I1120 21:08:51.658696  204529 start.go:242] waiting for startup goroutines ...
	I1120 21:08:51.658732  204529 start.go:247] waiting for cluster config update ...
	I1120 21:08:51.658752  204529 start.go:256] writing updated cluster config ...
	I1120 21:08:51.659053  204529 ssh_runner.go:195] Run: rm -f paused
	I1120 21:08:51.663208  204529 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:08:51.667537  204529 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-wkdjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:52.673337  204529 pod_ready.go:94] pod "coredns-5dd5756b68-wkdjm" is "Ready"
	I1120 21:08:52.673366  204529 pod_ready.go:86] duration metric: took 1.005803212s for pod "coredns-5dd5756b68-wkdjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:52.676978  204529 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-023521" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:52.682317  204529 pod_ready.go:94] pod "etcd-old-k8s-version-023521" is "Ready"
	I1120 21:08:52.682402  204529 pod_ready.go:86] duration metric: took 5.398591ms for pod "etcd-old-k8s-version-023521" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:52.685821  204529 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-023521" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:52.691092  204529 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-023521" is "Ready"
	I1120 21:08:52.691132  204529 pod_ready.go:86] duration metric: took 5.286211ms for pod "kube-apiserver-old-k8s-version-023521" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:52.697169  204529 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-023521" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:52.871265  204529 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-023521" is "Ready"
	I1120 21:08:52.871290  204529 pod_ready.go:86] duration metric: took 174.088176ms for pod "kube-controller-manager-old-k8s-version-023521" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:53.072481  204529 pod_ready.go:83] waiting for pod "kube-proxy-9zkv2" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:53.471325  204529 pod_ready.go:94] pod "kube-proxy-9zkv2" is "Ready"
	I1120 21:08:53.471355  204529 pod_ready.go:86] duration metric: took 398.847773ms for pod "kube-proxy-9zkv2" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:53.672400  204529 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-023521" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:54.071117  204529 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-023521" is "Ready"
	I1120 21:08:54.071151  204529 pod_ready.go:86] duration metric: took 398.725211ms for pod "kube-scheduler-old-k8s-version-023521" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:08:54.071164  204529 pod_ready.go:40] duration metric: took 2.407922167s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:08:54.130898  204529 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1120 21:08:54.134333  204529 out.go:203] 
	W1120 21:08:54.137535  204529 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1120 21:08:54.140518  204529 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1120 21:08:54.144230  204529 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-023521" cluster and "default" namespace by default
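
The start log above ends with minikube's readiness checks: around 21:08:50 it polls the apiserver's /healthz endpoint on 192.168.85.2:8443 and only then begins waiting for the node and kube-system pods. Below is a minimal Go sketch of such a healthz poll; the address, retry cadence, and the use of InsecureSkipVerify are illustrative assumptions only — the real check in the log authenticates with the cluster's client certificates rather than skipping TLS verification.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Illustrative client; InsecureSkipVerify is an assumption for the sketch only.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for i := 0; i < 10; i++ {
            resp, err := client.Get("https://192.168.85.2:8443/healthz")
            if err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    fmt.Println("apiserver healthz: ok") // corresponds to "returned 200: ok" above
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("apiserver healthz: not ready")
    }
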
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	e04cc9fd5efbe       1611cd07b61d5       9 seconds ago       Running             busybox                   0                   e535b980cdcfb       busybox                                          default
	f8d3ed91c9212       97e04611ad434       15 seconds ago      Running             coredns                   0                   ed7c26a7b7ae3       coredns-5dd5756b68-wkdjm                         kube-system
	44e2db88a94d2       ba04bb24b9575       15 seconds ago      Running             storage-provisioner       0                   326766b782866       storage-provisioner                              kube-system
	1a68da234c92d       b1a8c6f707935       26 seconds ago      Running             kindnet-cni               0                   1c80c06b261cc       kindnet-n8fg9                                    kube-system
	2f225892ac178       940f54a5bcae9       28 seconds ago      Running             kube-proxy                0                   cc7515c0fb8e0       kube-proxy-9zkv2                                 kube-system
	ec6898b6bdca7       46cc66ccc7c19       49 seconds ago      Running             kube-controller-manager   0                   dd887a12be721       kube-controller-manager-old-k8s-version-023521   kube-system
	a1fecef7703c0       762dce4090c5f       49 seconds ago      Running             kube-scheduler            0                   48c921dde1e30       kube-scheduler-old-k8s-version-023521            kube-system
	926b6b2ac3f4e       9cdd6470f48c8       49 seconds ago      Running             etcd                      0                   5a4b5791116f7       etcd-old-k8s-version-023521                      kube-system
	3838205c7f6e1       00543d2fe5d71       49 seconds ago      Running             kube-apiserver            0                   aa65269d165e6       kube-apiserver-old-k8s-version-023521            kube-system
	
	
	==> containerd <==
	Nov 20 21:08:51 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:51.301655479Z" level=info msg="connecting to shim 44e2db88a94d2e237c818e4da098823b2a70fc76be486872d1c574b4027fbb32" address="unix:///run/containerd/s/c939db798c5f671f6494019542467d44e82202ef9fc87bbda2d5eb2cd698d7f2" protocol=ttrpc version=3
	Nov 20 21:08:51 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:51.305676035Z" level=info msg="CreateContainer within sandbox \"ed7c26a7b7ae3e95daeef87dddb800d013e623fb9e4adbea7be4fa59a0a4d06c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 20 21:08:51 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:51.323636028Z" level=info msg="Container f8d3ed91c92120ebc6cdee4a1cae90ecc19d703c73bfd991ef98f4f6382f468f: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 21:08:51 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:51.337005954Z" level=info msg="CreateContainer within sandbox \"ed7c26a7b7ae3e95daeef87dddb800d013e623fb9e4adbea7be4fa59a0a4d06c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8d3ed91c92120ebc6cdee4a1cae90ecc19d703c73bfd991ef98f4f6382f468f\""
	Nov 20 21:08:51 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:51.338769200Z" level=info msg="StartContainer for \"f8d3ed91c92120ebc6cdee4a1cae90ecc19d703c73bfd991ef98f4f6382f468f\""
	Nov 20 21:08:51 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:51.343016345Z" level=info msg="connecting to shim f8d3ed91c92120ebc6cdee4a1cae90ecc19d703c73bfd991ef98f4f6382f468f" address="unix:///run/containerd/s/99c95334820a3abcda097e97b15e7269c0808203ba77a99301c9c2df78f3c29d" protocol=ttrpc version=3
	Nov 20 21:08:51 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:51.397032106Z" level=info msg="StartContainer for \"44e2db88a94d2e237c818e4da098823b2a70fc76be486872d1c574b4027fbb32\" returns successfully"
	Nov 20 21:08:51 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:51.449601499Z" level=info msg="StartContainer for \"f8d3ed91c92120ebc6cdee4a1cae90ecc19d703c73bfd991ef98f4f6382f468f\" returns successfully"
	Nov 20 21:08:54 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:54.676260743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:9efbd2b5-b6e4-4170-a68d-a23aed850439,Namespace:default,Attempt:0,}"
	Nov 20 21:08:54 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:54.761305036Z" level=info msg="connecting to shim e535b980cdcfb69dcbad021109e901efbb7bfdd1e8e8e113951b4900cc45808f" address="unix:///run/containerd/s/ec9644d8d906904c60650aa1c4483207993c9aa6824d780964f0387e80568c7a" namespace=k8s.io protocol=ttrpc version=3
	Nov 20 21:08:54 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:54.824733553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:9efbd2b5-b6e4-4170-a68d-a23aed850439,Namespace:default,Attempt:0,} returns sandbox id \"e535b980cdcfb69dcbad021109e901efbb7bfdd1e8e8e113951b4900cc45808f\""
	Nov 20 21:08:54 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:54.830879840Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 21:08:56 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:56.970158963Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:08:56 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:56.972073826Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 20 21:08:56 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:56.974852393Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:08:56 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:56.977882614Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:08:56 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:56.978271429Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.147188345s"
	Nov 20 21:08:56 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:56.978314047Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 20 21:08:56 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:56.982421399Z" level=info msg="CreateContainer within sandbox \"e535b980cdcfb69dcbad021109e901efbb7bfdd1e8e8e113951b4900cc45808f\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 20 21:08:56 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:56.995877046Z" level=info msg="Container e04cc9fd5efbe35285fddb9302d2e927d66ed7cf7d89da5df9f56b8803aedcd2: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 21:08:57 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:57.007782290Z" level=info msg="CreateContainer within sandbox \"e535b980cdcfb69dcbad021109e901efbb7bfdd1e8e8e113951b4900cc45808f\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"e04cc9fd5efbe35285fddb9302d2e927d66ed7cf7d89da5df9f56b8803aedcd2\""
	Nov 20 21:08:57 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:57.008589886Z" level=info msg="StartContainer for \"e04cc9fd5efbe35285fddb9302d2e927d66ed7cf7d89da5df9f56b8803aedcd2\""
	Nov 20 21:08:57 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:57.009691180Z" level=info msg="connecting to shim e04cc9fd5efbe35285fddb9302d2e927d66ed7cf7d89da5df9f56b8803aedcd2" address="unix:///run/containerd/s/ec9644d8d906904c60650aa1c4483207993c9aa6824d780964f0387e80568c7a" protocol=ttrpc version=3
	Nov 20 21:08:57 old-k8s-version-023521 containerd[756]: time="2025-11-20T21:08:57.077826429Z" level=info msg="StartContainer for \"e04cc9fd5efbe35285fddb9302d2e927d66ed7cf7d89da5df9f56b8803aedcd2\" returns successfully"
	Nov 20 21:09:03 old-k8s-version-023521 containerd[756]: E1120 21:09:03.578422     756 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [f8d3ed91c92120ebc6cdee4a1cae90ecc19d703c73bfd991ef98f4f6382f468f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55717 - 58159 "HINFO IN 2254010911663685545.5882765781244882875. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039646393s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-023521
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-023521
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=old-k8s-version-023521
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_08_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:08:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-023521
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:09:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:08:55 +0000   Thu, 20 Nov 2025 21:08:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:08:55 +0000   Thu, 20 Nov 2025 21:08:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:08:55 +0000   Thu, 20 Nov 2025 21:08:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:08:55 +0000   Thu, 20 Nov 2025 21:08:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-023521
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                b55587fb-b894-4020-a3f5-9d7b089d08c4
	  Boot ID:                    0cc3a06a-788d-45d4-8fff-2131330a9ee0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-wkdjm                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-old-k8s-version-023521                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kindnet-n8fg9                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-023521             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-023521    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-9zkv2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-023521             100m (5%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeHasSufficientMemory  51s (x8 over 51s)  kubelet          Node old-k8s-version-023521 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 51s)  kubelet          Node old-k8s-version-023521 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x7 over 51s)  kubelet          Node old-k8s-version-023521 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  51s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s                kubelet          Node old-k8s-version-023521 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s                kubelet          Node old-k8s-version-023521 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s                kubelet          Node old-k8s-version-023521 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-023521 event: Registered Node old-k8s-version-023521 in Controller
	  Normal  NodeReady                17s                kubelet          Node old-k8s-version-023521 status is now: NodeReady
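
The Conditions and Events tables above (Ready turning True at 21:08:50) are what the node_ready and pod_ready waits in the start log key off of. A hedged client-go sketch of that Ready-condition check follows; the kubeconfig path and node name are taken from this report, while the error handling and polling interval are illustrative assumptions, not minikube's actual implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path as written by this run (assumption: still present on disk).
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21923-2300/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        for {
            node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-023521", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            // Retry until Ready, as the log shows between 21:08:38 and 21:08:50.
            time.Sleep(2 * time.Second)
        }
    }
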
	
	
	==> dmesg <==
	[Nov20 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014399] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.765613] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.782554] kauditd_printk_skb: 36 callbacks suppressed
	[Nov20 20:40] hrtimer: interrupt took 1888672 ns
	
	
	==> etcd [926b6b2ac3f4e183407145e23efc6c8775e0197125017db50628bd815b69e43a] <==
	{"level":"info","ts":"2025-11-20T21:08:17.357843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-20T21:08:17.358058Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-20T21:08:17.369389Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-20T21:08:17.369414Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-20T21:08:17.369343Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-20T21:08:17.370256Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-20T21:08:17.370285Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-20T21:08:17.502541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-20T21:08:17.50269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-20T21:08:17.502753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-20T21:08:17.502794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-20T21:08:17.502859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-20T21:08:17.502912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-20T21:08:17.502963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-20T21:08:17.506643Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-023521 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-20T21:08:17.506862Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T21:08:17.507044Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T21:08:17.50815Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-20T21:08:17.506912Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T21:08:17.5088Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-20T21:08:17.509435Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-20T21:08:17.509646Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-20T21:08:17.510733Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T21:08:17.566546Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T21:08:17.566791Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 21:09:07 up 51 min,  0 user,  load average: 2.35, 3.21, 2.75
	Linux old-k8s-version-023521 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1a68da234c92d87fe620807404b7f86d355dbceb31d217f95048b507faeeb5fb] <==
	I1120 21:08:40.525122       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:08:40.525726       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 21:08:40.526025       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:08:40.526041       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:08:40.526178       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:08:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:08:40.727806       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:08:40.727913       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:08:40.727970       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:08:40.819453       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:08:41.028356       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:08:41.028387       1 metrics.go:72] Registering metrics
	I1120 21:08:41.028608       1 controller.go:711] "Syncing nftables rules"
	I1120 21:08:50.731130       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:08:50.731198       1 main.go:301] handling current node
	I1120 21:09:00.727385       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:09:00.727417       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3838205c7f6e1cc00448237ab227db4e52352ac7f000353d4eebf62e34dd087c] <==
	I1120 21:08:21.129294       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1120 21:08:21.129448       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 21:08:21.135173       1 controller.go:624] quota admission added evaluator for: namespaces
	I1120 21:08:21.150980       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:08:21.152974       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1120 21:08:21.153393       1 aggregator.go:166] initial CRD sync complete...
	I1120 21:08:21.153532       1 autoregister_controller.go:141] Starting autoregister controller
	I1120 21:08:21.153621       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 21:08:21.153716       1 cache.go:39] Caches are synced for autoregister controller
	I1120 21:08:21.180508       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1120 21:08:21.934529       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 21:08:21.942592       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 21:08:21.942625       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:08:22.596122       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:08:22.654509       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:08:22.757958       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 21:08:22.770312       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1120 21:08:22.771800       1 controller.go:624] quota admission added evaluator for: endpoints
	I1120 21:08:22.777081       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:08:22.982263       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1120 21:08:24.286242       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1120 21:08:24.304705       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 21:08:24.317230       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1120 21:08:37.187593       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1120 21:08:37.278082       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ec6898b6bdca72f11c2c4d2ef9c3abb8773c568887034f35c57a3dc6c25aa058] <==
	I1120 21:08:37.249055       1 shared_informer.go:318] Caches are synced for resource quota
	I1120 21:08:37.252979       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-n8fg9"
	I1120 21:08:37.253301       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9zkv2"
	I1120 21:08:37.302559       1 shared_informer.go:318] Caches are synced for attach detach
	I1120 21:08:37.335913       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1120 21:08:37.444687       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-kxxv2"
	I1120 21:08:37.513981       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-wkdjm"
	I1120 21:08:37.571287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="239.650414ms"
	I1120 21:08:37.588038       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.701352ms"
	I1120 21:08:37.629074       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="40.987424ms"
	I1120 21:08:37.629223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.576µs"
	I1120 21:08:37.656124       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 21:08:37.656156       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1120 21:08:37.656483       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 21:08:39.010372       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1120 21:08:39.040385       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-kxxv2"
	I1120 21:08:39.055461       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.011478ms"
	I1120 21:08:39.071704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.152043ms"
	I1120 21:08:39.071968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="139.826µs"
	I1120 21:08:50.825109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.158µs"
	I1120 21:08:50.857892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.863µs"
	I1120 21:08:51.608706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.134µs"
	I1120 21:08:52.074909       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1120 21:08:52.618658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.730237ms"
	I1120 21:08:52.619605       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.667µs"
	
	
	==> kube-proxy [2f225892ac17809d1001502c59aa33c3daa1a4fbf4e2366e33db57fbbd0826f8] <==
	I1120 21:08:38.495387       1 server_others.go:69] "Using iptables proxy"
	I1120 21:08:38.517177       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1120 21:08:38.573772       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:08:38.578240       1 server_others.go:152] "Using iptables Proxier"
	I1120 21:08:38.578279       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1120 21:08:38.578286       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1120 21:08:38.578319       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1120 21:08:38.578720       1 server.go:846] "Version info" version="v1.28.0"
	I1120 21:08:38.578732       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:08:38.579562       1 config.go:188] "Starting service config controller"
	I1120 21:08:38.579586       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1120 21:08:38.579656       1 config.go:97] "Starting endpoint slice config controller"
	I1120 21:08:38.579662       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1120 21:08:38.580189       1 config.go:315] "Starting node config controller"
	I1120 21:08:38.580197       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1120 21:08:38.680113       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1120 21:08:38.680161       1 shared_informer.go:318] Caches are synced for service config
	I1120 21:08:38.680387       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a1fecef7703c09c58fed023c3691cc0d3341e668da394331a0b24fcff4824062] <==
	W1120 21:08:21.732737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1120 21:08:21.732857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1120 21:08:21.733020       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1120 21:08:21.733041       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1120 21:08:21.733231       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1120 21:08:21.733345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1120 21:08:21.735700       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1120 21:08:21.735738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1120 21:08:21.735893       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1120 21:08:21.735907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1120 21:08:21.736197       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1120 21:08:21.736214       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1120 21:08:21.736266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1120 21:08:21.736275       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1120 21:08:21.736604       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1120 21:08:21.736616       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1120 21:08:21.737055       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1120 21:08:21.737068       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1120 21:08:21.737136       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1120 21:08:21.737146       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1120 21:08:21.737204       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1120 21:08:21.737219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1120 21:08:21.737363       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1120 21:08:21.737373       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1120 21:08:23.023618       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.199624    1572 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.201236    1572 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.257019    1572 topology_manager.go:215] "Topology Admit Handler" podUID="33f7c1e7-cffe-4f30-ba0e-5e494a195fb4" podNamespace="kube-system" podName="kube-proxy-9zkv2"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.275077    1572 topology_manager.go:215] "Topology Admit Handler" podUID="40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a" podNamespace="kube-system" podName="kindnet-n8fg9"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.317077    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a-cni-cfg\") pod \"kindnet-n8fg9\" (UID: \"40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a\") " pod="kube-system/kindnet-n8fg9"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.317153    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33f7c1e7-cffe-4f30-ba0e-5e494a195fb4-xtables-lock\") pod \"kube-proxy-9zkv2\" (UID: \"33f7c1e7-cffe-4f30-ba0e-5e494a195fb4\") " pod="kube-system/kube-proxy-9zkv2"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.317189    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnv59\" (UniqueName: \"kubernetes.io/projected/33f7c1e7-cffe-4f30-ba0e-5e494a195fb4-kube-api-access-fnv59\") pod \"kube-proxy-9zkv2\" (UID: \"33f7c1e7-cffe-4f30-ba0e-5e494a195fb4\") " pod="kube-system/kube-proxy-9zkv2"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.317213    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a-lib-modules\") pod \"kindnet-n8fg9\" (UID: \"40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a\") " pod="kube-system/kindnet-n8fg9"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.317243    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfmkj\" (UniqueName: \"kubernetes.io/projected/40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a-kube-api-access-cfmkj\") pod \"kindnet-n8fg9\" (UID: \"40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a\") " pod="kube-system/kindnet-n8fg9"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.317282    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/33f7c1e7-cffe-4f30-ba0e-5e494a195fb4-kube-proxy\") pod \"kube-proxy-9zkv2\" (UID: \"33f7c1e7-cffe-4f30-ba0e-5e494a195fb4\") " pod="kube-system/kube-proxy-9zkv2"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.317311    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33f7c1e7-cffe-4f30-ba0e-5e494a195fb4-lib-modules\") pod \"kube-proxy-9zkv2\" (UID: \"33f7c1e7-cffe-4f30-ba0e-5e494a195fb4\") " pod="kube-system/kube-proxy-9zkv2"
	Nov 20 21:08:37 old-k8s-version-023521 kubelet[1572]: I1120 21:08:37.317337    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a-xtables-lock\") pod \"kindnet-n8fg9\" (UID: \"40f2fe0c-6d5d-4c96-bd2b-37d57c495e0a\") " pod="kube-system/kindnet-n8fg9"
	Nov 20 21:08:38 old-k8s-version-023521 kubelet[1572]: I1120 21:08:38.570841    1572 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9zkv2" podStartSLOduration=1.570767561 podCreationTimestamp="2025-11-20 21:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:08:38.567146331 +0000 UTC m=+14.325419869" watchObservedRunningTime="2025-11-20 21:08:38.570767561 +0000 UTC m=+14.329041107"
	Nov 20 21:08:44 old-k8s-version-023521 kubelet[1572]: I1120 21:08:44.460168    1572 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-n8fg9" podStartSLOduration=5.470600336 podCreationTimestamp="2025-11-20 21:08:37 +0000 UTC" firstStartedPulling="2025-11-20 21:08:38.180227122 +0000 UTC m=+13.938500659" lastFinishedPulling="2025-11-20 21:08:40.169752475 +0000 UTC m=+15.928026013" observedRunningTime="2025-11-20 21:08:40.578571284 +0000 UTC m=+16.336844822" watchObservedRunningTime="2025-11-20 21:08:44.46012569 +0000 UTC m=+20.218399228"
	Nov 20 21:08:50 old-k8s-version-023521 kubelet[1572]: I1120 21:08:50.783694    1572 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 20 21:08:50 old-k8s-version-023521 kubelet[1572]: I1120 21:08:50.815329    1572 topology_manager.go:215] "Topology Admit Handler" podUID="b53fd876-cd07-4d74-9ca4-925ee07956a3" podNamespace="kube-system" podName="storage-provisioner"
	Nov 20 21:08:50 old-k8s-version-023521 kubelet[1572]: I1120 21:08:50.824471    1572 topology_manager.go:215] "Topology Admit Handler" podUID="63ea4694-f3ea-4d95-8c6e-98a67aecaf2c" podNamespace="kube-system" podName="coredns-5dd5756b68-wkdjm"
	Nov 20 21:08:50 old-k8s-version-023521 kubelet[1572]: I1120 21:08:50.913292    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63ea4694-f3ea-4d95-8c6e-98a67aecaf2c-config-volume\") pod \"coredns-5dd5756b68-wkdjm\" (UID: \"63ea4694-f3ea-4d95-8c6e-98a67aecaf2c\") " pod="kube-system/coredns-5dd5756b68-wkdjm"
	Nov 20 21:08:50 old-k8s-version-023521 kubelet[1572]: I1120 21:08:50.913366    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mxf7\" (UniqueName: \"kubernetes.io/projected/b53fd876-cd07-4d74-9ca4-925ee07956a3-kube-api-access-9mxf7\") pod \"storage-provisioner\" (UID: \"b53fd876-cd07-4d74-9ca4-925ee07956a3\") " pod="kube-system/storage-provisioner"
	Nov 20 21:08:50 old-k8s-version-023521 kubelet[1572]: I1120 21:08:50.913404    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b53fd876-cd07-4d74-9ca4-925ee07956a3-tmp\") pod \"storage-provisioner\" (UID: \"b53fd876-cd07-4d74-9ca4-925ee07956a3\") " pod="kube-system/storage-provisioner"
	Nov 20 21:08:50 old-k8s-version-023521 kubelet[1572]: I1120 21:08:50.913431    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dwcm\" (UniqueName: \"kubernetes.io/projected/63ea4694-f3ea-4d95-8c6e-98a67aecaf2c-kube-api-access-4dwcm\") pod \"coredns-5dd5756b68-wkdjm\" (UID: \"63ea4694-f3ea-4d95-8c6e-98a67aecaf2c\") " pod="kube-system/coredns-5dd5756b68-wkdjm"
	Nov 20 21:08:51 old-k8s-version-023521 kubelet[1572]: I1120 21:08:51.630544    1572 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wkdjm" podStartSLOduration=14.630419733 podCreationTimestamp="2025-11-20 21:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:08:51.609419762 +0000 UTC m=+27.367693300" watchObservedRunningTime="2025-11-20 21:08:51.630419733 +0000 UTC m=+27.388693287"
	Nov 20 21:08:52 old-k8s-version-023521 kubelet[1572]: I1120 21:08:52.603535    1572 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.603489155 podCreationTimestamp="2025-11-20 21:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:08:51.631536708 +0000 UTC m=+27.389810246" watchObservedRunningTime="2025-11-20 21:08:52.603489155 +0000 UTC m=+28.361762693"
	Nov 20 21:08:54 old-k8s-version-023521 kubelet[1572]: I1120 21:08:54.369743    1572 topology_manager.go:215] "Topology Admit Handler" podUID="9efbd2b5-b6e4-4170-a68d-a23aed850439" podNamespace="default" podName="busybox"
	Nov 20 21:08:54 old-k8s-version-023521 kubelet[1572]: I1120 21:08:54.444577    1572 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zgjv\" (UniqueName: \"kubernetes.io/projected/9efbd2b5-b6e4-4170-a68d-a23aed850439-kube-api-access-7zgjv\") pod \"busybox\" (UID: \"9efbd2b5-b6e4-4170-a68d-a23aed850439\") " pod="default/busybox"
	
	
	==> storage-provisioner [44e2db88a94d2e237c818e4da098823b2a70fc76be486872d1c574b4027fbb32] <==
	I1120 21:08:51.407613       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 21:08:51.441581       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 21:08:51.441717       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1120 21:08:51.468650       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 21:08:51.471037       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-023521_ce7a5da5-a297-466c-90b2-f74ac14dce09!
	I1120 21:08:51.474735       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"db4fb156-2957-4548-9aa8-7a3e0f9fb8ba", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-023521_ce7a5da5-a297-466c-90b2-f74ac14dce09 became leader
	I1120 21:08:51.572254       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-023521_ce7a5da5-a297-466c-90b2-f74ac14dce09!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-023521 -n old-k8s-version-023521
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-023521 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
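The query above is what the post-mortem uses to surface any pod stuck outside the Running phase. For reference, a stand-alone sketch of the same field-selector query through client-go rather than kubectl (a hypothetical helper, not part of the test suite; assumes a kubeconfig at $HOME/.kube/config with the old-k8s-version-023521 context selected):

// listnotrunning.go - list every pod whose status.phase is not Running,
// across all namespaces, mirroring the kubectl field selector used above.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Same field selector the helper passes to kubectl: status.phase!=Running.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}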
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (13.77s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (14.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-882483 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3914dd0d-f188-4b9a-8dd2-72c422726597] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3914dd0d-f188-4b9a-8dd2-72c422726597] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.0036355s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-882483 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
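The assertion above boils down to one exec'd command and an integer compare. A minimal stand-alone sketch of the same check outside the test harness (assuming kubectl is on PATH and the busybox pod from testdata/busybox.yaml is still deployed in the no-preload-882483 context; this is not code from the test suite):

// ulimitcheck.go - exec `ulimit -n` inside the busybox pod and compare the
// result against the file-descriptor limit the test expects.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	const expected = 1048576 // limit the test expects inside the container

	out, err := exec.Command(
		"kubectl", "--context", "no-preload-882483",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n",
	).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl exec failed:", err)
		os.Exit(1)
	}

	got, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		fmt.Fprintln(os.Stderr, "unexpected ulimit output:", strings.TrimSpace(string(out)))
		os.Exit(1)
	}

	if got != expected {
		fmt.Printf("'ulimit -n' returned %d, expected %d\n", got, expected)
		os.Exit(1)
	}
	fmt.Println("file-descriptor limit looks correct:", got)
}

A soft limit of 1024 inside the pod usually means the container runtime service was started without a raised LimitNOFILE, so the workload inherits the stock default rather than the 1048576 the test expects; the empty HostConfig.Ulimits in the docker inspect output below is one place that points in that direction.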
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-882483
helpers_test.go:243: (dbg) docker inspect no-preload-882483:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1f0d2ad1dcb3759e146761e2c8578d4459fc993b4a4b3bc9532b7630f4912bdf",
	        "Created": "2025-11-20T21:10:26.010385926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213588,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:10:26.107513525Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/1f0d2ad1dcb3759e146761e2c8578d4459fc993b4a4b3bc9532b7630f4912bdf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f0d2ad1dcb3759e146761e2c8578d4459fc993b4a4b3bc9532b7630f4912bdf/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f0d2ad1dcb3759e146761e2c8578d4459fc993b4a4b3bc9532b7630f4912bdf/hosts",
	        "LogPath": "/var/lib/docker/containers/1f0d2ad1dcb3759e146761e2c8578d4459fc993b4a4b3bc9532b7630f4912bdf/1f0d2ad1dcb3759e146761e2c8578d4459fc993b4a4b3bc9532b7630f4912bdf-json.log",
	        "Name": "/no-preload-882483",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-882483:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-882483",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1f0d2ad1dcb3759e146761e2c8578d4459fc993b4a4b3bc9532b7630f4912bdf",
	                "LowerDir": "/var/lib/docker/overlay2/081a7dcffb05310abec02624633cefedd83b62ed44013ab8180d55a713ef8131-init/diff:/var/lib/docker/overlay2/5105da773b59b243b777c3c083d206b6a741bd11ebc5a0283799917fe36ebbb2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/081a7dcffb05310abec02624633cefedd83b62ed44013ab8180d55a713ef8131/merged",
	                "UpperDir": "/var/lib/docker/overlay2/081a7dcffb05310abec02624633cefedd83b62ed44013ab8180d55a713ef8131/diff",
	                "WorkDir": "/var/lib/docker/overlay2/081a7dcffb05310abec02624633cefedd83b62ed44013ab8180d55a713ef8131/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-882483",
	                "Source": "/var/lib/docker/volumes/no-preload-882483/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-882483",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-882483",
	                "name.minikube.sigs.k8s.io": "no-preload-882483",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1bd69978a310b14ac63f2b35c7992b572003e94bc8ce123766c440495956e954",
	            "SandboxKey": "/var/run/docker/netns/1bd69978a310",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-882483": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:56:79:89:22:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3914a1636d2aee4a414b22a4dfd645a85cd3facf5fdd8976d88ddaba212b7449",
	                    "EndpointID": "f1965385d05f51203f4933be889fcb940887b37e8f2fafef1a66e58a82c8086c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-882483",
	                        "1f0d2ad1dcb3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
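In the inspect output above, HostConfig.Ulimits is empty ("Ulimits": []), so the container inherits whatever nofile limit the Docker daemon and its service unit pass down rather than carrying an explicit override. A small sketch (not part of the test suite; assumes the docker CLI is on PATH) that decodes just that field instead of scanning the full JSON:

// inspect_ulimits.go - print HostConfig.Ulimits from `docker inspect` output.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type ulimit struct {
	Name string `json:"Name"`
	Soft int64  `json:"Soft"`
	Hard int64  `json:"Hard"`
}

type inspectEntry struct {
	HostConfig struct {
		Ulimits []ulimit `json:"Ulimits"`
	} `json:"HostConfig"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "no-preload-882483").Output()
	if err != nil {
		log.Fatalf("docker inspect failed: %v", err)
	}

	var entries []inspectEntry // docker inspect emits a JSON array, one entry per object
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatalf("decoding inspect output: %v", err)
	}

	for _, e := range entries {
		if len(e.HostConfig.Ulimits) == 0 {
			fmt.Println("no explicit ulimits on the container; it inherits the daemon/service defaults")
			continue
		}
		for _, u := range e.HostConfig.Ulimits {
			fmt.Printf("%s: soft=%d hard=%d\n", u.Name, u.Soft, u.Hard)
		}
	}
}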
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-882483 -n no-preload-882483
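The --format flag used in these post-mortem steps is a Go text/template rendered against minikube's status struct, which is why field references like {{.Host}} and {{.APIServer}} select individual fields. A toy illustration with stand-in types (these are not minikube's own definitions):

// status_template.go - render a {{.Host}}-style template against a status value.
package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the fields the real status command exposes.
type Status struct {
	Host      string
	APIServer string
}

func main() {
	st := Status{Host: "Running", APIServer: "Running"}

	// Same template syntax as `status --format={{.Host}}` above.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}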
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-882483 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-882483 logs -n 25: (1.284451671s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-448616 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-982573                                                                                                                                                                                                                        │ kubernetes-upgrade-982573 │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:06 UTC │
	│ delete  │ -p cilium-448616                                                                                                                                                                                                                                    │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:06 UTC │
	│ start   │ -p force-systemd-env-444240 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-444240  │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p cert-expiration-339813 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-339813    │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ force-systemd-env-444240 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-444240  │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ delete  │ -p force-systemd-env-444240                                                                                                                                                                                                                         │ force-systemd-env-444240  │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p cert-options-530158 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-530158       │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ cert-options-530158 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-530158       │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ -p cert-options-530158 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-530158       │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ delete  │ -p cert-options-530158                                                                                                                                                                                                                              │ cert-options-530158       │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p old-k8s-version-023521 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:08 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-023521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │ 20 Nov 25 21:09 UTC │
	│ stop    │ -p old-k8s-version-023521 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │ 20 Nov 25 21:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-023521 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │ 20 Nov 25 21:09 UTC │
	│ start   │ -p old-k8s-version-023521 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ -p cert-expiration-339813 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-339813    │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p cert-expiration-339813                                                                                                                                                                                                                           │ cert-expiration-339813    │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ image   │ old-k8s-version-023521 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ -p no-preload-882483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-882483         │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:11 UTC │
	│ pause   │ -p old-k8s-version-023521 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ unpause │ -p old-k8s-version-023521 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p old-k8s-version-023521                                                                                                                                                                                                                           │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p old-k8s-version-023521                                                                                                                                                                                                                           │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ -p embed-certs-121127 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-121127        │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:10:32
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:10:32.272234  215319 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:10:32.272427  215319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:10:32.272454  215319 out.go:374] Setting ErrFile to fd 2...
	I1120 21:10:32.272477  215319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:10:32.272794  215319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 21:10:32.273270  215319 out.go:368] Setting JSON to false
	I1120 21:10:32.274261  215319 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3182,"bootTime":1763669851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1120 21:10:32.274365  215319 start.go:143] virtualization:  
	I1120 21:10:32.278249  215319 out.go:179] * [embed-certs-121127] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:10:32.281536  215319 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:10:32.281590  215319 notify.go:221] Checking for updates...
	I1120 21:10:32.285312  215319 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:10:32.288516  215319 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 21:10:32.291528  215319 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	I1120 21:10:32.294538  215319 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:10:32.297547  215319 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:10:32.301075  215319 config.go:182] Loaded profile config "no-preload-882483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 21:10:32.301249  215319 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:10:32.333529  215319 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:10:32.333649  215319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:10:32.417912  215319 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-20 21:10:32.407575159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:10:32.418017  215319 docker.go:319] overlay module found
	I1120 21:10:32.421353  215319 out.go:179] * Using the docker driver based on user configuration
	I1120 21:10:32.424325  215319 start.go:309] selected driver: docker
	I1120 21:10:32.424349  215319 start.go:930] validating driver "docker" against <nil>
	I1120 21:10:32.424363  215319 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:10:32.425059  215319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:10:32.513107  215319 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-20 21:10:32.502549025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:10:32.513274  215319 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 21:10:32.513509  215319 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:10:32.516584  215319 out.go:179] * Using Docker driver with root privileges
	I1120 21:10:32.519406  215319 cni.go:84] Creating CNI manager for ""
	I1120 21:10:32.519478  215319 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:10:32.519490  215319 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:10:32.519579  215319 start.go:353] cluster config:
	{Name:embed-certs-121127 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-121127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:10:32.522717  215319 out.go:179] * Starting "embed-certs-121127" primary control-plane node in "embed-certs-121127" cluster
	I1120 21:10:32.525448  215319 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 21:10:32.528424  215319 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:10:32.531154  215319 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 21:10:32.531209  215319 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1120 21:10:32.531219  215319 cache.go:65] Caching tarball of preloaded images
	I1120 21:10:32.531306  215319 preload.go:238] Found /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1120 21:10:32.531323  215319 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1120 21:10:32.531409  215319 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:10:32.531695  215319 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/config.json ...
	I1120 21:10:32.531724  215319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/config.json: {Name:mkf1caef776ab7651062c2e535c2c88870c5e983 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:32.569179  215319 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:10:32.569206  215319 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:10:32.569221  215319 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:10:32.569244  215319 start.go:360] acquireMachinesLock for embed-certs-121127: {Name:mk01ab0b00d92a3a57a2470bc1735436b9279226 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:10:32.569377  215319 start.go:364] duration metric: took 93.942µs to acquireMachinesLock for "embed-certs-121127"
	I1120 21:10:32.569411  215319 start.go:93] Provisioning new machine with config: &{Name:embed-certs-121127 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-121127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 21:10:32.569485  215319 start.go:125] createHost starting for "" (driver="docker")
	I1120 21:10:30.823943  213043 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-882483
	
	I1120 21:10:30.823965  213043 ubuntu.go:182] provisioning hostname "no-preload-882483"
	I1120 21:10:30.824029  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:30.849688  213043 main.go:143] libmachine: Using SSH client type: native
	I1120 21:10:30.849989  213043 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1120 21:10:30.850000  213043 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-882483 && echo "no-preload-882483" | sudo tee /etc/hostname
	I1120 21:10:31.045383  213043 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-882483
	
	I1120 21:10:31.045498  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:31.066529  213043 main.go:143] libmachine: Using SSH client type: native
	I1120 21:10:31.066869  213043 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1120 21:10:31.066887  213043 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-882483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-882483/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-882483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:10:31.230853  213043 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:10:31.230882  213043 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-2300/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-2300/.minikube}
	I1120 21:10:31.230914  213043 ubuntu.go:190] setting up certificates
	I1120 21:10:31.230923  213043 provision.go:84] configureAuth start
	I1120 21:10:31.230989  213043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-882483
	I1120 21:10:31.268164  213043 provision.go:143] copyHostCerts
	I1120 21:10:31.268386  213043 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem, removing ...
	I1120 21:10:31.268398  213043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem
	I1120 21:10:31.268536  213043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem (1078 bytes)
	I1120 21:10:31.268750  213043 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem, removing ...
	I1120 21:10:31.268760  213043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem
	I1120 21:10:31.268797  213043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem (1123 bytes)
	I1120 21:10:31.268939  213043 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem, removing ...
	I1120 21:10:31.268949  213043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem
	I1120 21:10:31.269019  213043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem (1675 bytes)
	I1120 21:10:31.269128  213043 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem org=jenkins.no-preload-882483 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-882483]
	I1120 21:10:31.500851  213043 provision.go:177] copyRemoteCerts
	I1120 21:10:31.500966  213043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:10:31.501044  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:31.526060  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:31.631846  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:10:31.664345  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 21:10:31.689418  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:10:31.712693  213043 provision.go:87] duration metric: took 481.750269ms to configureAuth
	I1120 21:10:31.712717  213043 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:10:31.712893  213043 config.go:182] Loaded profile config "no-preload-882483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 21:10:31.712900  213043 machine.go:97] duration metric: took 4.087197444s to provisionDockerMachine
	I1120 21:10:31.712907  213043 client.go:176] duration metric: took 6.936840004s to LocalClient.Create
	I1120 21:10:31.712921  213043 start.go:167] duration metric: took 6.936944408s to libmachine.API.Create "no-preload-882483"
	I1120 21:10:31.712928  213043 start.go:293] postStartSetup for "no-preload-882483" (driver="docker")
	I1120 21:10:31.712936  213043 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:10:31.712989  213043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:10:31.713029  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:31.733700  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:31.878326  213043 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:10:31.883660  213043 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:10:31.883690  213043 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:10:31.883701  213043 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-2300/.minikube/addons for local assets ...
	I1120 21:10:31.883762  213043 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-2300/.minikube/files for local assets ...
	I1120 21:10:31.883840  213043 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem -> 40892.pem in /etc/ssl/certs
	I1120 21:10:31.883944  213043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:10:31.892942  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem --> /etc/ssl/certs/40892.pem (1708 bytes)
	I1120 21:10:31.917654  213043 start.go:296] duration metric: took 204.712086ms for postStartSetup
	I1120 21:10:31.918000  213043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-882483
	I1120 21:10:31.936123  213043 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/config.json ...
	I1120 21:10:31.936389  213043 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:10:31.936443  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:31.967982  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:32.078669  213043 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:10:32.085996  213043 start.go:128] duration metric: took 7.314437317s to createHost
	I1120 21:10:32.086027  213043 start.go:83] releasing machines lock for "no-preload-882483", held for 7.314572515s
	I1120 21:10:32.086102  213043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-882483
	I1120 21:10:32.106405  213043 ssh_runner.go:195] Run: cat /version.json
	I1120 21:10:32.106490  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:32.106537  213043 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:10:32.106624  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:32.142946  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:32.159643  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:32.355216  213043 ssh_runner.go:195] Run: systemctl --version
	I1120 21:10:32.362321  213043 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:10:32.370092  213043 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:10:32.370164  213043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:10:32.401817  213043 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1120 21:10:32.401849  213043 start.go:496] detecting cgroup driver to use...
	I1120 21:10:32.401884  213043 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:10:32.401938  213043 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1120 21:10:32.422916  213043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1120 21:10:32.437483  213043 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:10:32.437558  213043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:10:32.459930  213043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:10:32.493465  213043 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:10:32.662678  213043 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:10:32.843704  213043 docker.go:234] disabling docker service ...
	I1120 21:10:32.843789  213043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:10:32.876917  213043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:10:32.895625  213043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:10:33.089053  213043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:10:33.275927  213043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:10:33.293194  213043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:10:33.338648  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1120 21:10:33.355936  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1120 21:10:33.368759  213043 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1120 21:10:33.368844  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1120 21:10:33.381875  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 21:10:33.395432  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1120 21:10:33.408098  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 21:10:33.427665  213043 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:10:33.446128  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1120 21:10:33.466290  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1120 21:10:33.483959  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1120 21:10:33.499683  213043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:10:33.510700  213043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:10:33.518581  213043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:10:33.709896  213043 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1120 21:10:33.822015  213043 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1120 21:10:33.822134  213043 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1120 21:10:33.826846  213043 start.go:564] Will wait 60s for crictl version
	I1120 21:10:33.826910  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:33.831427  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:10:33.871229  213043 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1120 21:10:33.871292  213043 ssh_runner.go:195] Run: containerd --version
	I1120 21:10:33.891748  213043 ssh_runner.go:195] Run: containerd --version
	I1120 21:10:33.916967  213043 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1120 21:10:33.919885  213043 cli_runner.go:164] Run: docker network inspect no-preload-882483 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:10:33.936366  213043 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1120 21:10:33.945164  213043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:10:33.956120  213043 kubeadm.go:884] updating cluster {Name:no-preload-882483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-882483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:10:33.956230  213043 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 21:10:33.956282  213043 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:10:33.991508  213043 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1120 21:10:33.991530  213043 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1120 21:10:33.991565  213043 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:10:33.991798  213043 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 21:10:33.991887  213043 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 21:10:33.991969  213043 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 21:10:33.992048  213043 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 21:10:33.992124  213043 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1120 21:10:33.992204  213043 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1120 21:10:33.992287  213043 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 21:10:33.995739  213043 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 21:10:33.995864  213043 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 21:10:33.995919  213043 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 21:10:33.995969  213043 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:10:33.996007  213043 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 21:10:33.996052  213043 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1120 21:10:33.996098  213043 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 21:10:33.996136  213043 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1120 21:10:34.232651  213043 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1120 21:10:34.232766  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1120 21:10:34.242693  213043 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc"
	I1120 21:10:34.242790  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1120 21:10:34.243280  213043 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196"
	I1120 21:10:34.243352  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1120 21:10:34.250011  213043 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9"
	I1120 21:10:34.250142  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1120 21:10:34.250374  213043 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e"
	I1120 21:10:34.250534  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1120 21:10:34.252139  213043 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a"
	I1120 21:10:34.252252  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 21:10:34.252157  213043 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0"
	I1120 21:10:34.252385  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1120 21:10:34.282502  213043 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1120 21:10:34.282609  213043 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1120 21:10:34.282670  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:34.364616  213043 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1120 21:10:34.364808  213043 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 21:10:34.364896  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:34.364714  213043 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1120 21:10:34.365013  213043 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 21:10:34.365042  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:34.369043  213043 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1120 21:10:34.369083  213043 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1120 21:10:34.369145  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:34.369202  213043 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1120 21:10:34.369215  213043 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 21:10:34.369234  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:34.369556  213043 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1120 21:10:34.369581  213043 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 21:10:34.369613  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:34.369661  213043 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1120 21:10:34.369674  213043 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 21:10:34.369693  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:34.369764  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 21:10:34.395800  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 21:10:34.395860  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 21:10:34.395895  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 21:10:34.395942  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	W1120 21:10:34.396545  213043 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1120 21:10:34.396588  213043 retry.go:31] will retry after 175.374248ms: ssh: rejected: connect failed (open failed)
	I1120 21:10:34.418642  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 21:10:34.418788  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:34.418988  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 21:10:34.419068  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:34.419464  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 21:10:34.419549  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:32.573090  215319 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 21:10:32.573348  215319 start.go:159] libmachine.API.Create for "embed-certs-121127" (driver="docker")
	I1120 21:10:32.573382  215319 client.go:173] LocalClient.Create starting
	I1120 21:10:32.573462  215319 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem
	I1120 21:10:32.573519  215319 main.go:143] libmachine: Decoding PEM data...
	I1120 21:10:32.573540  215319 main.go:143] libmachine: Parsing certificate...
	I1120 21:10:32.573598  215319 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem
	I1120 21:10:32.573625  215319 main.go:143] libmachine: Decoding PEM data...
	I1120 21:10:32.573638  215319 main.go:143] libmachine: Parsing certificate...
	I1120 21:10:32.574005  215319 cli_runner.go:164] Run: docker network inspect embed-certs-121127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 21:10:32.593761  215319 cli_runner.go:211] docker network inspect embed-certs-121127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 21:10:32.593849  215319 network_create.go:284] running [docker network inspect embed-certs-121127] to gather additional debugging logs...
	I1120 21:10:32.593866  215319 cli_runner.go:164] Run: docker network inspect embed-certs-121127
	W1120 21:10:32.613669  215319 cli_runner.go:211] docker network inspect embed-certs-121127 returned with exit code 1
	I1120 21:10:32.613700  215319 network_create.go:287] error running [docker network inspect embed-certs-121127]: docker network inspect embed-certs-121127: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-121127 not found
	I1120 21:10:32.613712  215319 network_create.go:289] output of [docker network inspect embed-certs-121127]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-121127 not found
	
	** /stderr **
	I1120 21:10:32.613817  215319 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:10:32.635649  215319 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8f2399b7fac6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:ce:e1:0f:d8:b1} reservation:<nil>}
	I1120 21:10:32.636010  215319 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-954bfb8e5d57 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:f3:60:ee:cc:b7} reservation:<nil>}
	I1120 21:10:32.636319  215319 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-02e4726a397e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:f0:04:c7:8f:fa} reservation:<nil>}
	I1120 21:10:32.636566  215319 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3914a1636d2a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:72:79:08:e4:0c:17} reservation:<nil>}
	I1120 21:10:32.636944  215319 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ec6e0}
	I1120 21:10:32.636974  215319 network_create.go:124] attempt to create docker network embed-certs-121127 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1120 21:10:32.637039  215319 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-121127 embed-certs-121127
	I1120 21:10:32.714508  215319 network_create.go:108] docker network embed-certs-121127 192.168.85.0/24 created
	I1120 21:10:32.714543  215319 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-121127" container
	I1120 21:10:32.714637  215319 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 21:10:32.732682  215319 cli_runner.go:164] Run: docker volume create embed-certs-121127 --label name.minikube.sigs.k8s.io=embed-certs-121127 --label created_by.minikube.sigs.k8s.io=true
	I1120 21:10:32.756507  215319 oci.go:103] Successfully created a docker volume embed-certs-121127
	I1120 21:10:32.756594  215319 cli_runner.go:164] Run: docker run --rm --name embed-certs-121127-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-121127 --entrypoint /usr/bin/test -v embed-certs-121127:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 21:10:33.387177  215319 oci.go:107] Successfully prepared a docker volume embed-certs-121127
	I1120 21:10:33.387250  215319 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 21:10:33.387259  215319 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 21:10:33.387331  215319 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-121127:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1120 21:10:34.459326  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:34.459312  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:34.469757  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:34.543428  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 21:10:34.543515  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:34.572614  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:34.596526  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 21:10:34.596616  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 21:10:34.768263  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 21:10:34.768390  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 21:10:34.768474  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1120 21:10:34.768555  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 21:10:34.913213  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 21:10:35.009057  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 21:10:35.044003  213043 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1120 21:10:35.044151  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 21:10:35.044203  213043 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1120 21:10:35.044500  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1120 21:10:35.044227  213043 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1120 21:10:35.044565  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1120 21:10:35.044283  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1120 21:10:35.044661  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1120 21:10:35.090404  213043 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1120 21:10:35.090540  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1120 21:10:35.134742  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 21:10:35.151636  213043 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1120 21:10:35.151682  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1120 21:10:35.151825  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 21:10:35.151904  213043 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1120 21:10:35.151933  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	W1120 21:10:35.170164  213043 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1120 21:10:35.170361  213043 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1120 21:10:35.170495  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:10:35.302893  213043 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1120 21:10:35.302939  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1120 21:10:35.303014  213043 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1120 21:10:35.303099  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1120 21:10:35.303158  213043 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1120 21:10:35.303175  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1120 21:10:35.303244  213043 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1120 21:10:35.303299  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1120 21:10:35.303359  213043 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1120 21:10:35.303420  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1120 21:10:35.318458  213043 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1120 21:10:35.318494  213043 retry.go:31] will retry after 320.092741ms: ssh: rejected: connect failed (open failed)
	I1120 21:10:35.402542  213043 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1120 21:10:35.402627  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1120 21:10:35.402675  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:35.433428  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:35.434774  213043 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1120 21:10:35.434811  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1120 21:10:35.434868  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:35.435061  213043 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1120 21:10:35.435084  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1120 21:10:35.435128  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:35.436184  213043 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1120 21:10:35.436246  213043 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:10:35.436293  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:35.436364  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:35.496688  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:35.502403  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:35.522014  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:35.999069  213043 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1120 21:10:35.999137  213043 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1120 21:10:35.999169  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1120 21:10:35.999284  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:10:36.372697  213043 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1120 21:10:36.372773  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1120 21:10:36.384259  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:10:39.441482  215319 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-121127:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (6.054103754s)
	I1120 21:10:39.441517  215319 kic.go:203] duration metric: took 6.054253885s to extract preloaded images to volume ...
	W1120 21:10:39.441650  215319 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1120 21:10:39.441778  215319 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 21:10:39.518129  215319 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-121127 --name embed-certs-121127 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-121127 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-121127 --network embed-certs-121127 --ip 192.168.85.2 --volume embed-certs-121127:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 21:10:39.885329  215319 cli_runner.go:164] Run: docker container inspect embed-certs-121127 --format={{.State.Running}}
	I1120 21:10:39.911983  215319 cli_runner.go:164] Run: docker container inspect embed-certs-121127 --format={{.State.Status}}
	I1120 21:10:39.942192  215319 cli_runner.go:164] Run: docker exec embed-certs-121127 stat /var/lib/dpkg/alternatives/iptables
	I1120 21:10:40.016882  215319 oci.go:144] the created container "embed-certs-121127" has a running status.
	I1120 21:10:40.016911  215319 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa...
	I1120 21:10:40.545644  215319 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 21:10:40.577441  215319 cli_runner.go:164] Run: docker container inspect embed-certs-121127 --format={{.State.Status}}
	I1120 21:10:40.606642  215319 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 21:10:40.606662  215319 kic_runner.go:114] Args: [docker exec --privileged embed-certs-121127 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 21:10:40.698547  215319 cli_runner.go:164] Run: docker container inspect embed-certs-121127 --format={{.State.Status}}
	I1120 21:10:40.728147  215319 machine.go:94] provisionDockerMachine start ...
	I1120 21:10:40.728241  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:10:40.753894  215319 main.go:143] libmachine: Using SSH client type: native
	I1120 21:10:40.756191  215319 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1120 21:10:40.756212  215319 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:10:40.757811  215319 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:10:40.404163  213043 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (4.03136077s)
	I1120 21:10:40.404192  213043 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1120 21:10:40.404212  213043 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1120 21:10:40.404265  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1120 21:10:40.404354  213043 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.020070988s)
	I1120 21:10:40.404393  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:10:41.950235  213043 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.54581724s)
	I1120 21:10:41.950249  213043 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.545959765s)
	I1120 21:10:41.950265  213043 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1120 21:10:41.950282  213043 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1120 21:10:41.950283  213043 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1120 21:10:41.950341  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1120 21:10:41.950375  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1120 21:10:43.404202  213043 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.453805888s)
	I1120 21:10:43.404216  213043 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.453855522s)
	I1120 21:10:43.404229  213043 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1120 21:10:43.404235  213043 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1120 21:10:43.404253  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1120 21:10:43.404262  213043 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1120 21:10:43.404305  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1120 21:10:44.415608  213043 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.011280814s)
	I1120 21:10:44.415633  213043 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1120 21:10:44.415653  213043 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1120 21:10:44.415699  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1120 21:10:43.906323  215319 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-121127
	
	I1120 21:10:43.906398  215319 ubuntu.go:182] provisioning hostname "embed-certs-121127"
	I1120 21:10:43.906523  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:10:43.926276  215319 main.go:143] libmachine: Using SSH client type: native
	I1120 21:10:43.926627  215319 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1120 21:10:43.926645  215319 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-121127 && echo "embed-certs-121127" | sudo tee /etc/hostname
	I1120 21:10:44.095040  215319 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-121127
	
	I1120 21:10:44.095145  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:10:44.122076  215319 main.go:143] libmachine: Using SSH client type: native
	I1120 21:10:44.122377  215319 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1120 21:10:44.122394  215319 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-121127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-121127/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-121127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:10:44.272543  215319 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:10:44.272616  215319 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-2300/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-2300/.minikube}
	I1120 21:10:44.272648  215319 ubuntu.go:190] setting up certificates
	I1120 21:10:44.272694  215319 provision.go:84] configureAuth start
	I1120 21:10:44.272795  215319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-121127
	I1120 21:10:44.295429  215319 provision.go:143] copyHostCerts
	I1120 21:10:44.295506  215319 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem, removing ...
	I1120 21:10:44.295515  215319 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem
	I1120 21:10:44.295590  215319 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem (1078 bytes)
	I1120 21:10:44.295690  215319 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem, removing ...
	I1120 21:10:44.295696  215319 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem
	I1120 21:10:44.295721  215319 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem (1123 bytes)
	I1120 21:10:44.295770  215319 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem, removing ...
	I1120 21:10:44.295774  215319 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem
	I1120 21:10:44.295797  215319 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem (1675 bytes)
	I1120 21:10:44.295842  215319 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem org=jenkins.embed-certs-121127 san=[127.0.0.1 192.168.85.2 embed-certs-121127 localhost minikube]
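
The configureAuth step above issues a server certificate signed by the minikube CA, carrying the listed SANs (loopback, node IP, hostname aliases). A minimal sketch of producing such a SAN-bearing server cert with Go's crypto/x509 — the file names and the assumption of a PKCS#1 RSA CA key are illustrative only, and error handling is elided for brevity:

    // issue_server_cert.go: sketch of issuing a server certificate signed by an
    // existing CA, with IP and DNS SANs like the provision log above.
    // "ca.pem"/"ca-key.pem"/"server.pem"/"server-key.pem" are example names.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caPEM, _ := os.ReadFile("ca.pem")
        caKeyPEM, _ := os.ReadFile("ca-key.pem")
        caBlock, _ := pem.Decode(caPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1 RSA CA key

        // Fresh key pair for the server certificate.
        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-121127"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs mirroring the log: loopback, node IP, hostname aliases.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:    []string{"embed-certs-121127", "localhost", "minikube"},
        }

        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        _ = os.WriteFile("server.pem",
            pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
        _ = os.WriteFile("server-key.pem",
            pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
    }
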
	I1120 21:10:44.963289  215319 provision.go:177] copyRemoteCerts
	I1120 21:10:44.963368  215319 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:10:44.963416  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:10:44.981983  215319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa Username:docker}
	I1120 21:10:45.114970  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1120 21:10:45.164130  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:10:45.266347  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:10:45.333899  215319 provision.go:87] duration metric: took 1.061173746s to configureAuth
	I1120 21:10:45.333993  215319 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:10:45.334254  215319 config.go:182] Loaded profile config "embed-certs-121127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 21:10:45.334310  215319 machine.go:97] duration metric: took 4.606143409s to provisionDockerMachine
	I1120 21:10:45.334332  215319 client.go:176] duration metric: took 12.760939004s to LocalClient.Create
	I1120 21:10:45.334394  215319 start.go:167] duration metric: took 12.761046313s to libmachine.API.Create "embed-certs-121127"
	I1120 21:10:45.334423  215319 start.go:293] postStartSetup for "embed-certs-121127" (driver="docker")
	I1120 21:10:45.334466  215319 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:10:45.334557  215319 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:10:45.334632  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:10:45.362644  215319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa Username:docker}
	I1120 21:10:45.472580  215319 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:10:45.476699  215319 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:10:45.476780  215319 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:10:45.476813  215319 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-2300/.minikube/addons for local assets ...
	I1120 21:10:45.476891  215319 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-2300/.minikube/files for local assets ...
	I1120 21:10:45.477009  215319 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem -> 40892.pem in /etc/ssl/certs
	I1120 21:10:45.477174  215319 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:10:45.485880  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem --> /etc/ssl/certs/40892.pem (1708 bytes)
	I1120 21:10:45.521795  215319 start.go:296] duration metric: took 187.325152ms for postStartSetup
	I1120 21:10:45.522236  215319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-121127
	I1120 21:10:45.541305  215319 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/config.json ...
	I1120 21:10:45.541583  215319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:10:45.541627  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:10:45.560207  215319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa Username:docker}
	I1120 21:10:45.666038  215319 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:10:45.673583  215319 start.go:128] duration metric: took 13.104071835s to createHost
	I1120 21:10:45.673657  215319 start.go:83] releasing machines lock for "embed-certs-121127", held for 13.104264806s
	I1120 21:10:45.673772  215319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-121127
	I1120 21:10:45.691364  215319 ssh_runner.go:195] Run: cat /version.json
	I1120 21:10:45.691417  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:10:45.691707  215319 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:10:45.691775  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:10:45.721473  215319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa Username:docker}
	I1120 21:10:45.721526  215319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa Username:docker}
	I1120 21:10:45.926136  215319 ssh_runner.go:195] Run: systemctl --version
	I1120 21:10:45.933647  215319 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:10:45.938244  215319 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:10:45.938311  215319 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:10:45.977247  215319 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
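
The find/mv step above is how conflicting bridge and podman CNI configs get parked out of the way before kindnet is installed. A rough local sketch of the same rename, assuming the /etc/cni/net.d layout from the log (would need root to actually run):

    // disable_bridge_cni.go: sketch of the "find ... -exec mv {} {}.mk_disabled"
    // step — rename bridge/podman CNI configs so the runtime ignores them.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        for _, pattern := range []string{"*bridge*", "*podman*"} {
            matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pattern))
            for _, m := range matches {
                if filepath.Ext(m) == ".mk_disabled" {
                    continue // already disabled on a previous run
                }
                fmt.Printf("disabling %s\n", m)
                _ = os.Rename(m, m+".mk_disabled")
            }
        }
    }
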
	I1120 21:10:45.977275  215319 start.go:496] detecting cgroup driver to use...
	I1120 21:10:45.977309  215319 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:10:45.977358  215319 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1120 21:10:45.994321  215319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1120 21:10:46.012099  215319 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:10:46.012157  215319 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:10:46.041743  215319 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:10:46.071929  215319 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:10:46.211976  215319 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:10:46.376132  215319 docker.go:234] disabling docker service ...
	I1120 21:10:46.376211  215319 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:10:46.408443  215319 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:10:46.425330  215319 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:10:46.587808  215319 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:10:46.744357  215319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:10:46.763278  215319 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:10:46.782798  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1120 21:10:46.793468  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1120 21:10:46.805559  215319 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1120 21:10:46.805672  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1120 21:10:46.817132  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 21:10:46.828734  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1120 21:10:46.839608  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 21:10:46.850596  215319 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:10:46.862099  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1120 21:10:46.875041  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1120 21:10:46.885476  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1120 21:10:46.899612  215319 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:10:46.908798  215319 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:10:46.917346  215319 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:10:47.072258  215319 ssh_runner.go:195] Run: sudo systemctl restart containerd
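
The run of sed edits above rewrites /etc/containerd/config.toml so containerd uses the same "cgroupfs" driver the kubelet is configured with, then containerd is restarted. A small sketch of the SystemdCgroup substitution applied to an in-memory sample (the regexp mirrors the sed expression; the sample TOML fragment is illustrative):

    // cgroup_driver.go: sketch of forcing SystemdCgroup = false, matching the
    // sed command in the log, but applied to a string instead of the real file.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        sample := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(sample, "${1}SystemdCgroup = false"))
    }
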
	I1120 21:10:47.275782  215319 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1120 21:10:47.275901  215319 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1120 21:10:47.281032  215319 start.go:564] Will wait 60s for crictl version
	I1120 21:10:47.281114  215319 ssh_runner.go:195] Run: which crictl
	I1120 21:10:47.285086  215319 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:10:47.329951  215319 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1120 21:10:47.330037  215319 ssh_runner.go:195] Run: containerd --version
	I1120 21:10:47.361602  215319 ssh_runner.go:195] Run: containerd --version
	I1120 21:10:47.389507  215319 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1120 21:10:45.870691  213043 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.45496921s)
	I1120 21:10:45.870720  213043 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1120 21:10:45.870741  213043 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1120 21:10:45.870787  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1120 21:10:47.392809  215319 cli_runner.go:164] Run: docker network inspect embed-certs-121127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:10:47.412640  215319 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 21:10:47.419211  215319 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
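
The bash one-liner above rewrites /etc/hosts in one shot: keep every line that is not the host.minikube.internal mapping, append the gateway entry, then copy the temp file back over /etc/hosts. The same filter-and-append, sketched on an in-memory string (the sample content is made up):

    // hosts_entry.go: sketch of the grep -v / echo / cp pipeline above,
    // operating on a string for illustration instead of the real file.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal\n"
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue // equivalent of: grep -v $'\thost.minikube.internal$'
            }
            kept = append(kept, line)
        }
        kept = append(kept, "192.168.85.1\thost.minikube.internal")
        fmt.Println(strings.Join(kept, "\n"))
    }
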
	I1120 21:10:47.432953  215319 kubeadm.go:884] updating cluster {Name:embed-certs-121127 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-121127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:10:47.433071  215319 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 21:10:47.433139  215319 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:10:47.460258  215319 containerd.go:627] all images are preloaded for containerd runtime.
	I1120 21:10:47.460278  215319 containerd.go:534] Images already preloaded, skipping extraction
	I1120 21:10:47.460337  215319 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:10:47.489781  215319 containerd.go:627] all images are preloaded for containerd runtime.
	I1120 21:10:47.489865  215319 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:10:47.489887  215319 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1120 21:10:47.490042  215319 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-121127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-121127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:10:47.490138  215319 ssh_runner.go:195] Run: sudo crictl info
	I1120 21:10:47.517409  215319 cni.go:84] Creating CNI manager for ""
	I1120 21:10:47.517467  215319 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:10:47.517481  215319 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:10:47.517503  215319 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-121127 NodeName:embed-certs-121127 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:10:47.517622  215319 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-121127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:10:47.517689  215319 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:10:47.528700  215319 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:10:47.528768  215319 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:10:47.539896  215319 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1120 21:10:47.556476  215319 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:10:47.573801  215319 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
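
The 2231-byte kubeadm.yaml.new written here is the config dumped above, rendered with the per-node values (node name, advertise address, API server port) before being copied to the node. A trimmed illustration of that kind of templating — the template text below is a cut-down stand-in for the example, not minikube's actual template:

    // kubeadm_config.go: illustrative templating of a reduced kubeadm
    // InitConfiguration, filling in the per-node fields seen in the log.
    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: unix:///run/containerd/containerd.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        - name: "node-ip"
          value: "{{.NodeIP}}"
    `

    func main() {
        data := struct {
            NodeName string
            NodeIP   string
            Port     int
        }{"embed-certs-121127", "192.168.85.2", 8443}
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        _ = t.Execute(os.Stdout, data)
    }
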
	I1120 21:10:47.590490  215319 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:10:47.594962  215319 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:10:47.606556  215319 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:10:47.739301  215319 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:10:47.761249  215319 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127 for IP: 192.168.85.2
	I1120 21:10:47.761272  215319 certs.go:195] generating shared ca certs ...
	I1120 21:10:47.761288  215319 certs.go:227] acquiring lock for ca certs: {Name:mke329f4cdcc6bfc142b6fc6817600b3d33b3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:47.761463  215319 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-2300/.minikube/ca.key
	I1120 21:10:47.761507  215319 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.key
	I1120 21:10:47.761519  215319 certs.go:257] generating profile certs ...
	I1120 21:10:47.761589  215319 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/client.key
	I1120 21:10:47.761613  215319 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/client.crt with IP's: []
	I1120 21:10:48.111681  215319 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/client.crt ...
	I1120 21:10:48.111757  215319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/client.crt: {Name:mk41e49e5955215c92b66f29e111e723c695d93e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:48.111993  215319 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/client.key ...
	I1120 21:10:48.112028  215319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/client.key: {Name:mkcdb564eebad1869884c43fbb1e957ef4199a1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:48.112159  215319 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.key.3d6a70a3
	I1120 21:10:48.112199  215319 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.crt.3d6a70a3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1120 21:10:48.429809  215319 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.crt.3d6a70a3 ...
	I1120 21:10:48.429878  215319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.crt.3d6a70a3: {Name:mkcfae9cc43f66e3cf9a5997127280ec140cdb2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:48.430096  215319 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.key.3d6a70a3 ...
	I1120 21:10:48.430130  215319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.key.3d6a70a3: {Name:mke19264fba2e1ecf4c132bc0912f71b112c201b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:48.430266  215319 certs.go:382] copying /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.crt.3d6a70a3 -> /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.crt
	I1120 21:10:48.430401  215319 certs.go:386] copying /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.key.3d6a70a3 -> /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.key
	I1120 21:10:48.430529  215319 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/proxy-client.key
	I1120 21:10:48.430568  215319 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/proxy-client.crt with IP's: []
	I1120 21:10:48.940671  215319 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/proxy-client.crt ...
	I1120 21:10:48.940739  215319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/proxy-client.crt: {Name:mk71966f91c454d889688da2933343c6c48dec89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:48.940932  215319 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/proxy-client.key ...
	I1120 21:10:48.940964  215319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/proxy-client.key: {Name:mke99021ae8c3cb7e2eb27ac89c7511ee24bece4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:48.941210  215319 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089.pem (1338 bytes)
	W1120 21:10:48.941273  215319 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089_empty.pem, impossibly tiny 0 bytes
	I1120 21:10:48.941297  215319 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:10:48.941351  215319 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:10:48.941399  215319 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:10:48.941438  215319 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem (1675 bytes)
	I1120 21:10:48.941513  215319 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem (1708 bytes)
	I1120 21:10:48.942105  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:10:48.959701  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:10:48.977305  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:10:48.995854  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:10:49.013559  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1120 21:10:49.033271  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:10:49.051611  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:10:49.069777  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 21:10:49.087934  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem --> /usr/share/ca-certificates/40892.pem (1708 bytes)
	I1120 21:10:49.106276  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:10:49.124422  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089.pem --> /usr/share/ca-certificates/4089.pem (1338 bytes)
	I1120 21:10:49.143075  215319 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:10:49.156866  215319 ssh_runner.go:195] Run: openssl version
	I1120 21:10:49.164193  215319 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/40892.pem
	I1120 21:10:49.173583  215319 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/40892.pem /etc/ssl/certs/40892.pem
	I1120 21:10:49.181808  215319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40892.pem
	I1120 21:10:49.186785  215319 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:28 /usr/share/ca-certificates/40892.pem
	I1120 21:10:49.186866  215319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40892.pem
	I1120 21:10:49.229204  215319 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:10:49.237249  215319 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/40892.pem /etc/ssl/certs/3ec20f2e.0
	I1120 21:10:49.245475  215319 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:10:49.253641  215319 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:10:49.261689  215319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:10:49.266210  215319 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:10:49.266280  215319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:10:49.310753  215319 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:10:49.318839  215319 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 21:10:49.326973  215319 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4089.pem
	I1120 21:10:49.335207  215319 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4089.pem /etc/ssl/certs/4089.pem
	I1120 21:10:49.343340  215319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4089.pem
	I1120 21:10:49.347817  215319 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:28 /usr/share/ca-certificates/4089.pem
	I1120 21:10:49.347901  215319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4089.pem
	I1120 21:10:49.390888  215319 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:10:49.398902  215319 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4089.pem /etc/ssl/certs/51391683.0
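
The openssl/ln pairs above build the <subject-hash>.0 symlinks that OpenSSL-based clients use to look up CA certificates in /etc/ssl/certs. A sketch of one iteration, shelling out to openssl the same way (paths are taken from the log; here the link points straight at the certificate rather than through the intermediate /etc/ssl/certs/<name>.pem symlink, and root is needed to create it):

    // cert_hash_link.go: sketch of "openssl x509 -hash -noout" followed by
    // "ln -fs <cert> /etc/ssl/certs/<hash>.0", as in the log above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        certPath := "/usr/share/ca-certificates/4089.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "51391683"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // the -f in ln -fs
        if err := os.Symlink(certPath, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", certPath)
    }
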
	I1120 21:10:49.406620  215319 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:10:49.411818  215319 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:10:49.411898  215319 kubeadm.go:401] StartCluster: {Name:embed-certs-121127 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-121127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:10:49.411983  215319 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1120 21:10:49.412061  215319 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:10:49.448788  215319 cri.go:89] found id: ""
	I1120 21:10:49.448889  215319 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:10:49.459289  215319 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:10:49.467442  215319 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 21:10:49.467529  215319 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:10:49.478932  215319 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:10:49.478966  215319 kubeadm.go:158] found existing configuration files:
	
	I1120 21:10:49.479017  215319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 21:10:49.487902  215319 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:10:49.487981  215319 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:10:49.495649  215319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 21:10:49.506806  215319 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:10:49.506873  215319 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:10:49.521609  215319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 21:10:49.534189  215319 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:10:49.534265  215319 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:10:49.549708  215319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 21:10:49.574323  215319 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:10:49.574407  215319 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 21:10:49.598504  215319 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 21:10:49.657069  215319 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 21:10:49.657607  215319 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 21:10:49.690719  215319 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 21:10:49.690818  215319 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1120 21:10:49.690860  215319 kubeadm.go:319] OS: Linux
	I1120 21:10:49.690921  215319 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 21:10:49.690987  215319 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1120 21:10:49.691053  215319 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 21:10:49.691116  215319 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 21:10:49.691181  215319 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 21:10:49.691246  215319 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 21:10:49.691307  215319 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 21:10:49.691381  215319 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 21:10:49.691442  215319 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1120 21:10:49.795770  215319 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 21:10:49.795897  215319 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 21:10:49.796010  215319 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 21:10:49.802148  215319 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 21:10:49.804142  215319 out.go:252]   - Generating certificates and keys ...
	I1120 21:10:49.804262  215319 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 21:10:49.804356  215319 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 21:10:50.453181  215319 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 21:10:51.347078  215319 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 21:10:52.173658  215319 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 21:10:49.762274  213043 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (3.8914586s)
	I1120 21:10:49.762303  213043 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1120 21:10:49.762324  213043 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1120 21:10:49.762373  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1120 21:10:50.243577  213043 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1120 21:10:50.243620  213043 cache_images.go:125] Successfully loaded all cached images
	I1120 21:10:50.243627  213043 cache_images.go:94] duration metric: took 16.252082759s to LoadCachedImages
	I1120 21:10:50.243639  213043 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1120 21:10:50.243738  213043 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-882483 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-882483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:10:50.243810  213043 ssh_runner.go:195] Run: sudo crictl info
	I1120 21:10:50.287246  213043 cni.go:84] Creating CNI manager for ""
	I1120 21:10:50.287270  213043 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:10:50.287285  213043 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:10:50.287308  213043 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-882483 NodeName:no-preload-882483 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:10:50.287428  213043 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-882483"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:10:50.287508  213043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:10:50.295696  213043 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1120 21:10:50.295760  213043 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1120 21:10:50.304146  213043 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1120 21:10:50.304246  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1120 21:10:50.305597  213043 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21923-2300/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1120 21:10:50.306023  213043 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21923-2300/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1120 21:10:50.310024  213043 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1120 21:10:50.310058  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1120 21:10:51.235266  213043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:10:51.284371  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1120 21:10:51.295708  213043 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1120 21:10:51.295752  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1120 21:10:51.339481  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1120 21:10:51.361779  213043 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1120 21:10:51.361823  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1120 21:10:52.042136  213043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:10:52.051916  213043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1120 21:10:52.070654  213043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:10:52.088633  213043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1120 21:10:52.103541  213043 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:10:52.108452  213043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:10:52.119300  213043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:10:52.275826  213043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:10:52.301903  213043 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483 for IP: 192.168.76.2
	I1120 21:10:52.301978  213043 certs.go:195] generating shared ca certs ...
	I1120 21:10:52.302013  213043 certs.go:227] acquiring lock for ca certs: {Name:mke329f4cdcc6bfc142b6fc6817600b3d33b3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:52.302219  213043 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-2300/.minikube/ca.key
	I1120 21:10:52.302301  213043 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.key
	I1120 21:10:52.302343  213043 certs.go:257] generating profile certs ...
	I1120 21:10:52.302455  213043 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.key
	I1120 21:10:52.302496  213043 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt with IP's: []
	I1120 21:10:52.475055  213043 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt ...
	I1120 21:10:52.475158  213043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt: {Name:mke16c272213fcda87d56ed6709d26dba4d62f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:52.475447  213043 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.key ...
	I1120 21:10:52.475489  213043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.key: {Name:mk11537008085ba18fd08498bb3cd3d67a88403c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:52.475675  213043 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.key.5ea2376b
	I1120 21:10:52.475739  213043 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.crt.5ea2376b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1120 21:10:52.953001  213043 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.crt.5ea2376b ...
	I1120 21:10:52.953086  213043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.crt.5ea2376b: {Name:mkccfcd6f9d0ee0e8ecb43b201b9616e06251f37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:52.953331  213043 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.key.5ea2376b ...
	I1120 21:10:52.953393  213043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.key.5ea2376b: {Name:mka71b572b1de86324bfa1c51fcf20ebd1fd56e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:52.953549  213043 certs.go:382] copying /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.crt.5ea2376b -> /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.crt
	I1120 21:10:52.953717  213043 certs.go:386] copying /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.key.5ea2376b -> /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.key
	I1120 21:10:52.953877  213043 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/proxy-client.key
	I1120 21:10:52.953934  213043 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/proxy-client.crt with IP's: []
	I1120 21:10:54.102166  213043 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/proxy-client.crt ...
	I1120 21:10:54.102275  213043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/proxy-client.crt: {Name:mk3ef8ad5b02a9ed720dd5219a0dec14ba23c27c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:54.102547  213043 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/proxy-client.key ...
	I1120 21:10:54.102613  213043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/proxy-client.key: {Name:mk9ecc41c22dd18731c34c27ecd7ca439520a1a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:54.102982  213043 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089.pem (1338 bytes)
	W1120 21:10:54.103084  213043 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089_empty.pem, impossibly tiny 0 bytes
	I1120 21:10:54.103127  213043 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:10:54.103203  213043 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:10:54.103275  213043 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:10:54.103342  213043 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem (1675 bytes)
	I1120 21:10:54.103436  213043 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem (1708 bytes)
	I1120 21:10:54.104333  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:10:54.139019  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:10:54.158056  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:10:54.178364  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:10:54.199024  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 21:10:54.219004  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:10:54.241468  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:10:54.266858  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:10:54.287775  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:10:54.309214  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089.pem --> /usr/share/ca-certificates/4089.pem (1338 bytes)
	I1120 21:10:54.330497  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem --> /usr/share/ca-certificates/40892.pem (1708 bytes)
	I1120 21:10:54.350701  213043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:10:54.376543  213043 ssh_runner.go:195] Run: openssl version
	I1120 21:10:54.386156  213043 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:10:54.395660  213043 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:10:54.405357  213043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:10:54.410953  213043 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:10:54.411109  213043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:10:54.463743  213043 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:10:54.474887  213043 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 21:10:54.485054  213043 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4089.pem
	I1120 21:10:54.495515  213043 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4089.pem /etc/ssl/certs/4089.pem
	I1120 21:10:54.505920  213043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4089.pem
	I1120 21:10:54.511335  213043 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:28 /usr/share/ca-certificates/4089.pem
	I1120 21:10:54.511475  213043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4089.pem
	I1120 21:10:54.558639  213043 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:10:54.569012  213043 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4089.pem /etc/ssl/certs/51391683.0
	I1120 21:10:54.579698  213043 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/40892.pem
	I1120 21:10:54.589945  213043 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/40892.pem /etc/ssl/certs/40892.pem
	I1120 21:10:54.600319  213043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40892.pem
	I1120 21:10:54.606020  213043 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:28 /usr/share/ca-certificates/40892.pem
	I1120 21:10:54.606167  213043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40892.pem
	I1120 21:10:54.652833  213043 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:10:54.663253  213043 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/40892.pem /etc/ssl/certs/3ec20f2e.0
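(Note, not part of the captured output: the repeated ls/openssl/ln sequence above installs each CA into /etc/ssl/certs using OpenSSL's hashed-directory convention, where the link name is the certificate's subject hash plus a ".0" suffix, so TLS clients can find the CA by hash lookup. A minimal illustration, using the minikubeCA example from this run:)
	# The hash printed here (b5213941 for minikubeCA.pem in this run) becomes the symlink name
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0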
	I1120 21:10:54.674879  213043 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:10:54.680865  213043 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:10:54.680998  213043 kubeadm.go:401] StartCluster: {Name:no-preload-882483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-882483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:10:54.681127  213043 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1120 21:10:54.681238  213043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:10:54.719310  213043 cri.go:89] found id: ""
	I1120 21:10:54.719430  213043 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:10:54.730788  213043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:10:54.740813  213043 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 21:10:54.740925  213043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:10:54.762317  213043 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:10:54.762487  213043 kubeadm.go:158] found existing configuration files:
	
	I1120 21:10:54.762568  213043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 21:10:54.779135  213043 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:10:54.779243  213043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:10:54.794299  213043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 21:10:54.814263  213043 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:10:54.814388  213043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:10:54.827373  213043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 21:10:54.837939  213043 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:10:54.838093  213043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:10:54.847869  213043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 21:10:54.858397  213043 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:10:54.858626  213043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 21:10:54.867935  213043 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 21:10:54.934595  213043 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 21:10:54.935041  213043 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 21:10:54.975377  213043 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 21:10:54.975574  213043 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1120 21:10:54.975663  213043 kubeadm.go:319] OS: Linux
	I1120 21:10:54.975732  213043 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 21:10:54.975807  213043 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1120 21:10:54.975870  213043 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 21:10:54.975929  213043 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 21:10:54.975984  213043 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 21:10:54.976039  213043 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 21:10:54.976090  213043 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 21:10:54.976148  213043 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 21:10:54.976202  213043 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1120 21:10:55.105807  213043 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 21:10:55.106019  213043 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 21:10:55.106174  213043 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 21:10:55.124866  213043 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 21:10:52.295500  215319 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 21:10:52.922854  215319 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 21:10:52.923006  215319 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-121127 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1120 21:10:53.618363  215319 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 21:10:53.618710  215319 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-121127 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1120 21:10:54.678656  215319 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 21:10:55.022908  215319 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 21:10:55.930976  215319 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 21:10:55.931059  215319 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 21:10:56.196775  215319 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 21:10:56.256859  215319 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 21:10:57.480653  215319 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 21:10:57.882040  215319 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 21:10:59.008366  215319 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 21:10:59.009609  215319 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 21:10:59.012690  215319 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 21:10:55.129842  213043 out.go:252]   - Generating certificates and keys ...
	I1120 21:10:55.129953  213043 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 21:10:55.130034  213043 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 21:10:55.703119  213043 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 21:10:57.495554  213043 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 21:10:58.987375  213043 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 21:10:59.018119  215319 out.go:252]   - Booting up control plane ...
	I1120 21:10:59.018239  215319 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 21:10:59.019161  215319 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 21:10:59.023930  215319 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 21:10:59.053554  215319 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 21:10:59.053869  215319 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 21:10:59.063964  215319 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 21:10:59.064275  215319 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 21:10:59.064323  215319 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 21:10:59.223517  215319 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 21:10:59.223653  215319 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 21:11:00.233657  215319 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.006765293s
	I1120 21:11:00.234056  215319 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 21:11:00.234173  215319 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1120 21:11:00.234269  215319 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 21:11:00.234352  215319 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 21:11:00.160193  213043 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 21:11:00.487451  213043 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 21:11:00.487606  213043 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-882483] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1120 21:11:01.362818  213043 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 21:11:01.362965  213043 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-882483] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1120 21:11:01.950793  213043 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 21:11:02.234780  213043 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 21:11:02.354738  213043 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 21:11:02.354814  213043 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 21:11:03.294772  213043 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 21:11:03.782757  213043 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 21:11:04.193232  213043 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 21:11:04.382767  213043 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 21:11:04.860708  213043 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 21:11:04.860820  213043 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 21:11:04.863738  213043 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 21:11:06.422787  215319 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.18440449s
	I1120 21:11:04.867234  213043 out.go:252]   - Booting up control plane ...
	I1120 21:11:04.867345  213043 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 21:11:04.867426  213043 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 21:11:04.869523  213043 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 21:11:04.899399  213043 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 21:11:04.899521  213043 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 21:11:04.910846  213043 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 21:11:04.910950  213043 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 21:11:04.910992  213043 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 21:11:05.165409  213043 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 21:11:05.165535  213043 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 21:11:07.666553  213043 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.50132257s
	I1120 21:11:07.673736  213043 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 21:11:07.673836  213043 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1120 21:11:07.673929  213043 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 21:11:07.674011  213043 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 21:11:10.140123  215319 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.906080727s
	I1120 21:11:10.236438  215319 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.001234957s
	I1120 21:11:10.277276  215319 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 21:11:10.297843  215319 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 21:11:10.325911  215319 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 21:11:10.326122  215319 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-121127 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 21:11:10.343351  215319 kubeadm.go:319] [bootstrap-token] Using token: g2mfyc.h1z2cs46qltqtwt7
	I1120 21:11:10.346280  215319 out.go:252]   - Configuring RBAC rules ...
	I1120 21:11:10.346395  215319 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 21:11:10.355545  215319 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 21:11:10.371412  215319 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 21:11:10.379803  215319 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 21:11:10.384535  215319 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 21:11:10.389891  215319 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 21:11:10.651084  215319 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 21:11:11.178743  215319 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 21:11:11.643258  215319 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 21:11:11.644866  215319 kubeadm.go:319] 
	I1120 21:11:11.644956  215319 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 21:11:11.644967  215319 kubeadm.go:319] 
	I1120 21:11:11.645048  215319 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 21:11:11.645057  215319 kubeadm.go:319] 
	I1120 21:11:11.645083  215319 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 21:11:11.645148  215319 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 21:11:11.645207  215319 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 21:11:11.645216  215319 kubeadm.go:319] 
	I1120 21:11:11.645273  215319 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 21:11:11.645282  215319 kubeadm.go:319] 
	I1120 21:11:11.645332  215319 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 21:11:11.645340  215319 kubeadm.go:319] 
	I1120 21:11:11.645395  215319 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 21:11:11.645477  215319 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 21:11:11.645552  215319 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 21:11:11.645561  215319 kubeadm.go:319] 
	I1120 21:11:11.645649  215319 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 21:11:11.645732  215319 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 21:11:11.645740  215319 kubeadm.go:319] 
	I1120 21:11:11.645828  215319 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token g2mfyc.h1z2cs46qltqtwt7 \
	I1120 21:11:11.645940  215319 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0ba1c2e2176682abc636887486248c0692df9e5785bd62c1c27f34717ff2c43f \
	I1120 21:11:11.645968  215319 kubeadm.go:319] 	--control-plane 
	I1120 21:11:11.645976  215319 kubeadm.go:319] 
	I1120 21:11:11.646064  215319 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 21:11:11.646072  215319 kubeadm.go:319] 
	I1120 21:11:11.646158  215319 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token g2mfyc.h1z2cs46qltqtwt7 \
	I1120 21:11:11.646268  215319 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0ba1c2e2176682abc636887486248c0692df9e5785bd62c1c27f34717ff2c43f 
	I1120 21:11:11.655114  215319 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1120 21:11:11.655359  215319 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1120 21:11:11.655475  215319 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 21:11:11.655496  215319 cni.go:84] Creating CNI manager for ""
	I1120 21:11:11.655507  215319 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:11:11.659138  215319 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 21:11:11.662081  215319 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 21:11:11.673041  215319 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 21:11:11.673064  215319 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 21:11:11.710439  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 21:11:12.218534  215319 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 21:11:12.218778  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:12.218980  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-121127 minikube.k8s.io/updated_at=2025_11_20T21_11_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=embed-certs-121127 minikube.k8s.io/primary=true
	I1120 21:11:12.731728  215319 ops.go:34] apiserver oom_adj: -16
	I1120 21:11:12.731867  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:13.232731  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:13.731892  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:14.232449  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:14.732623  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:15.232406  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:15.732248  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:16.232649  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:16.732656  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:16.983771  215319 kubeadm.go:1114] duration metric: took 4.765063909s to wait for elevateKubeSystemPrivileges
	I1120 21:11:16.983860  215319 kubeadm.go:403] duration metric: took 27.571965786s to StartCluster
	I1120 21:11:16.983894  215319 settings.go:142] acquiring lock: {Name:mk8f1e96fadc1ef170d5eddc49033a884865c024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:16.983996  215319 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 21:11:16.985044  215319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/kubeconfig: {Name:mk7ea52a23a4d9fc2da4c68a59479b947db5281c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:16.985390  215319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:11:16.985651  215319 config.go:182] Loaded profile config "embed-certs-121127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 21:11:16.985828  215319 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:11:16.985907  215319 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-121127"
	I1120 21:11:16.985927  215319 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-121127"
	I1120 21:11:16.985951  215319 host.go:66] Checking if "embed-certs-121127" exists ...
	I1120 21:11:16.986480  215319 cli_runner.go:164] Run: docker container inspect embed-certs-121127 --format={{.State.Status}}
	I1120 21:11:16.986648  215319 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 21:11:16.987026  215319 addons.go:70] Setting default-storageclass=true in profile "embed-certs-121127"
	I1120 21:11:16.987047  215319 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-121127"
	I1120 21:11:16.987307  215319 cli_runner.go:164] Run: docker container inspect embed-certs-121127 --format={{.State.Status}}
	I1120 21:11:16.994548  215319 out.go:179] * Verifying Kubernetes components...
	I1120 21:11:16.997459  215319 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:11:17.032412  215319 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:11:17.035320  215319 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:11:17.035342  215319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:11:17.035408  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:11:17.043766  215319 addons.go:239] Setting addon default-storageclass=true in "embed-certs-121127"
	I1120 21:11:17.043811  215319 host.go:66] Checking if "embed-certs-121127" exists ...
	I1120 21:11:17.044255  215319 cli_runner.go:164] Run: docker container inspect embed-certs-121127 --format={{.State.Status}}
	I1120 21:11:17.084660  215319 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:11:17.084683  215319 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:11:17.084813  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:11:17.092328  215319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa Username:docker}
	I1120 21:11:17.116177  215319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa Username:docker}
	I1120 21:11:17.534085  215319 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:11:17.534275  215319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 21:11:17.612534  215319 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:11:17.775943  215319 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:11:18.900813  215319 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.366494688s)
	I1120 21:11:18.900857  215319 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
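(Note, not part of the captured output: the sed pipeline completed just above rewrites the coredns ConfigMap so the Corefile gains a log directive and a hosts block that resolves host.minikube.internal before queries fall through to the forward plugin. The relevant Corefile fragment afterwards looks roughly like the following; this is an illustrative reconstruction, not a dump from the cluster:)
	    log
	    errors
	    # ... default plugins unchanged ...
	    hosts {
	       192.168.85.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf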
	I1120 21:11:18.901467  215319 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.367345174s)
	I1120 21:11:18.903758  215319 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.29117473s)
	I1120 21:11:18.904554  215319 node_ready.go:35] waiting up to 6m0s for node "embed-certs-121127" to be "Ready" ...
	I1120 21:11:19.381801  215319 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.605810992s)
	I1120 21:11:19.384863  215319 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1120 21:11:17.344135  213043 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 9.673723727s
	I1120 21:11:18.482804  213043 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.812762535s
	I1120 21:11:20.179642  213043 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 12.508029475s
	I1120 21:11:20.218353  213043 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 21:11:20.236040  213043 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 21:11:20.253784  213043 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 21:11:20.254085  213043 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-882483 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 21:11:20.265837  213043 kubeadm.go:319] [bootstrap-token] Using token: ywj62v.23n6crze3giefwpo
	I1120 21:11:20.268865  213043 out.go:252]   - Configuring RBAC rules ...
	I1120 21:11:20.269002  213043 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 21:11:20.275805  213043 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 21:11:20.284587  213043 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 21:11:20.288932  213043 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 21:11:20.293256  213043 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 21:11:20.299239  213043 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 21:11:20.592688  213043 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 21:11:21.061072  213043 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 21:11:21.593533  213043 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 21:11:21.594862  213043 kubeadm.go:319] 
	I1120 21:11:21.594940  213043 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 21:11:21.594950  213043 kubeadm.go:319] 
	I1120 21:11:21.595034  213043 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 21:11:21.595041  213043 kubeadm.go:319] 
	I1120 21:11:21.595068  213043 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 21:11:21.595133  213043 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 21:11:21.595189  213043 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 21:11:21.595196  213043 kubeadm.go:319] 
	I1120 21:11:21.595253  213043 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 21:11:21.595261  213043 kubeadm.go:319] 
	I1120 21:11:21.595311  213043 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 21:11:21.595319  213043 kubeadm.go:319] 
	I1120 21:11:21.595373  213043 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 21:11:21.595455  213043 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 21:11:21.595530  213043 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 21:11:21.595539  213043 kubeadm.go:319] 
	I1120 21:11:21.595628  213043 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 21:11:21.595712  213043 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 21:11:21.595720  213043 kubeadm.go:319] 
	I1120 21:11:21.595815  213043 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ywj62v.23n6crze3giefwpo \
	I1120 21:11:21.595926  213043 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0ba1c2e2176682abc636887486248c0692df9e5785bd62c1c27f34717ff2c43f \
	I1120 21:11:21.595951  213043 kubeadm.go:319] 	--control-plane 
	I1120 21:11:21.595959  213043 kubeadm.go:319] 
	I1120 21:11:21.596048  213043 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 21:11:21.596056  213043 kubeadm.go:319] 
	I1120 21:11:21.596141  213043 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ywj62v.23n6crze3giefwpo \
	I1120 21:11:21.596251  213043 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0ba1c2e2176682abc636887486248c0692df9e5785bd62c1c27f34717ff2c43f 
	I1120 21:11:21.599732  213043 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1120 21:11:21.599979  213043 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1120 21:11:21.600091  213043 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 21:11:21.600143  213043 cni.go:84] Creating CNI manager for ""
	I1120 21:11:21.600153  213043 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:11:21.603419  213043 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 21:11:19.387776  215319 addons.go:515] duration metric: took 2.401930065s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1120 21:11:19.406917  215319 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-121127" context rescaled to 1 replicas
	W1120 21:11:20.907507  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	I1120 21:11:21.606304  213043 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 21:11:21.614557  213043 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 21:11:21.614580  213043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 21:11:21.634259  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 21:11:21.982875  213043 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 21:11:21.983009  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:21.983088  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-882483 minikube.k8s.io/updated_at=2025_11_20T21_11_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=no-preload-882483 minikube.k8s.io/primary=true
	I1120 21:11:22.007465  213043 ops.go:34] apiserver oom_adj: -16
	I1120 21:11:22.157159  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:22.657268  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:23.157803  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:23.657236  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:24.157477  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:24.657860  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:25.157411  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:25.657702  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:26.157843  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:26.285482  213043 kubeadm.go:1114] duration metric: took 4.302523321s to wait for elevateKubeSystemPrivileges
	I1120 21:11:26.285513  213043 kubeadm.go:403] duration metric: took 31.604531804s to StartCluster
	I1120 21:11:26.285531  213043 settings.go:142] acquiring lock: {Name:mk8f1e96fadc1ef170d5eddc49033a884865c024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:26.285593  213043 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 21:11:26.287164  213043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/kubeconfig: {Name:mk7ea52a23a4d9fc2da4c68a59479b947db5281c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:26.287414  213043 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 21:11:26.287556  213043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:11:26.287944  213043 config.go:182] Loaded profile config "no-preload-882483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 21:11:26.287991  213043 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:11:26.288058  213043 addons.go:70] Setting storage-provisioner=true in profile "no-preload-882483"
	I1120 21:11:26.288076  213043 addons.go:239] Setting addon storage-provisioner=true in "no-preload-882483"
	I1120 21:11:26.288097  213043 host.go:66] Checking if "no-preload-882483" exists ...
	I1120 21:11:26.288521  213043 addons.go:70] Setting default-storageclass=true in profile "no-preload-882483"
	I1120 21:11:26.288541  213043 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-882483"
	I1120 21:11:26.288783  213043 cli_runner.go:164] Run: docker container inspect no-preload-882483 --format={{.State.Status}}
	I1120 21:11:26.289073  213043 cli_runner.go:164] Run: docker container inspect no-preload-882483 --format={{.State.Status}}
	I1120 21:11:26.290586  213043 out.go:179] * Verifying Kubernetes components...
	I1120 21:11:26.293477  213043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:11:26.323920  213043 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1120 21:11:23.414189  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:25.414872  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	I1120 21:11:26.327978  213043 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:11:26.328003  213043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:11:26.328079  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:11:26.336255  213043 addons.go:239] Setting addon default-storageclass=true in "no-preload-882483"
	I1120 21:11:26.336296  213043 host.go:66] Checking if "no-preload-882483" exists ...
	I1120 21:11:26.336717  213043 cli_runner.go:164] Run: docker container inspect no-preload-882483 --format={{.State.Status}}
	I1120 21:11:26.360671  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:11:26.376397  213043 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:11:26.376418  213043 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:11:26.376479  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:11:26.400988  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:11:26.687850  213043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:11:26.690324  213043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 21:11:26.690505  213043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:11:26.734149  213043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:11:27.397240  213043 node_ready.go:35] waiting up to 6m0s for node "no-preload-882483" to be "Ready" ...
	I1120 21:11:27.397539  213043 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1120 21:11:27.679069  213043 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1120 21:11:27.682766  213043 addons.go:515] duration metric: took 1.394759028s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1120 21:11:27.905580  213043 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-882483" context rescaled to 1 replicas
	W1120 21:11:29.421800  213043 node_ready.go:57] node "no-preload-882483" has "Ready":"False" status (will retry)
	W1120 21:11:27.908189  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:30.412440  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:31.900213  213043 node_ready.go:57] node "no-preload-882483" has "Ready":"False" status (will retry)
	W1120 21:11:33.901668  213043 node_ready.go:57] node "no-preload-882483" has "Ready":"False" status (will retry)
	W1120 21:11:32.907272  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:34.908040  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:36.400656  213043 node_ready.go:57] node "no-preload-882483" has "Ready":"False" status (will retry)
	W1120 21:11:38.900383  213043 node_ready.go:57] node "no-preload-882483" has "Ready":"False" status (will retry)
	W1120 21:11:37.413240  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:39.413709  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:41.907357  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	I1120 21:11:39.906331  213043 node_ready.go:49] node "no-preload-882483" is "Ready"
	I1120 21:11:39.906365  213043 node_ready.go:38] duration metric: took 12.509091525s for node "no-preload-882483" to be "Ready" ...
	I1120 21:11:39.906384  213043 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:11:39.906485  213043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:11:39.929872  213043 api_server.go:72] duration metric: took 13.642420091s to wait for apiserver process to appear ...
	I1120 21:11:39.929897  213043 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:11:39.929916  213043 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1120 21:11:39.959040  213043 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1120 21:11:39.960371  213043 api_server.go:141] control plane version: v1.34.1
	I1120 21:11:39.960395  213043 api_server.go:131] duration metric: took 30.49151ms to wait for apiserver health ...
	I1120 21:11:39.960406  213043 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:11:39.968917  213043 system_pods.go:59] 8 kube-system pods found
	I1120 21:11:39.968949  213043 system_pods.go:61] "coredns-66bc5c9577-kbl4d" [7e90701b-e158-4e32-b311-ef635af8eec0] Pending
	I1120 21:11:39.968956  213043 system_pods.go:61] "etcd-no-preload-882483" [2cc20a64-e788-4bd3-99ae-22f5e88054c6] Running
	I1120 21:11:39.968960  213043 system_pods.go:61] "kindnet-jr57n" [43754294-4619-410a-9cf0-01baa9df142e] Running
	I1120 21:11:39.968964  213043 system_pods.go:61] "kube-apiserver-no-preload-882483" [e355b3e2-5d70-4e01-b443-2e5c756584db] Running
	I1120 21:11:39.968969  213043 system_pods.go:61] "kube-controller-manager-no-preload-882483" [b7f3c12d-8398-4a69-8399-1c21d54c7624] Running
	I1120 21:11:39.968974  213043 system_pods.go:61] "kube-proxy-n9cg7" [77a3defc-bd58-414c-9c2a-bf750429a720] Running
	I1120 21:11:39.968980  213043 system_pods.go:61] "kube-scheduler-no-preload-882483" [40685019-a9a4-4a4a-9e42-58d2a9f39f9b] Running
	I1120 21:11:39.968993  213043 system_pods.go:61] "storage-provisioner" [1698ab50-608c-439f-b6de-81323e57d2c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:11:39.969000  213043 system_pods.go:74] duration metric: took 8.588323ms to wait for pod list to return data ...
	I1120 21:11:39.969014  213043 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:11:39.974128  213043 default_sa.go:45] found service account: "default"
	I1120 21:11:39.974153  213043 default_sa.go:55] duration metric: took 5.133696ms for default service account to be created ...
	I1120 21:11:39.974162  213043 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:11:39.980371  213043 system_pods.go:86] 8 kube-system pods found
	I1120 21:11:39.980456  213043 system_pods.go:89] "coredns-66bc5c9577-kbl4d" [7e90701b-e158-4e32-b311-ef635af8eec0] Pending
	I1120 21:11:39.980476  213043 system_pods.go:89] "etcd-no-preload-882483" [2cc20a64-e788-4bd3-99ae-22f5e88054c6] Running
	I1120 21:11:39.980495  213043 system_pods.go:89] "kindnet-jr57n" [43754294-4619-410a-9cf0-01baa9df142e] Running
	I1120 21:11:39.980534  213043 system_pods.go:89] "kube-apiserver-no-preload-882483" [e355b3e2-5d70-4e01-b443-2e5c756584db] Running
	I1120 21:11:39.980557  213043 system_pods.go:89] "kube-controller-manager-no-preload-882483" [b7f3c12d-8398-4a69-8399-1c21d54c7624] Running
	I1120 21:11:39.980576  213043 system_pods.go:89] "kube-proxy-n9cg7" [77a3defc-bd58-414c-9c2a-bf750429a720] Running
	I1120 21:11:39.980616  213043 system_pods.go:89] "kube-scheduler-no-preload-882483" [40685019-a9a4-4a4a-9e42-58d2a9f39f9b] Running
	I1120 21:11:39.980644  213043 system_pods.go:89] "storage-provisioner" [1698ab50-608c-439f-b6de-81323e57d2c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:11:39.980689  213043 retry.go:31] will retry after 211.631364ms: missing components: kube-dns
	I1120 21:11:40.197643  213043 system_pods.go:86] 8 kube-system pods found
	I1120 21:11:40.197678  213043 system_pods.go:89] "coredns-66bc5c9577-kbl4d" [7e90701b-e158-4e32-b311-ef635af8eec0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:11:40.197685  213043 system_pods.go:89] "etcd-no-preload-882483" [2cc20a64-e788-4bd3-99ae-22f5e88054c6] Running
	I1120 21:11:40.197692  213043 system_pods.go:89] "kindnet-jr57n" [43754294-4619-410a-9cf0-01baa9df142e] Running
	I1120 21:11:40.197697  213043 system_pods.go:89] "kube-apiserver-no-preload-882483" [e355b3e2-5d70-4e01-b443-2e5c756584db] Running
	I1120 21:11:40.197715  213043 system_pods.go:89] "kube-controller-manager-no-preload-882483" [b7f3c12d-8398-4a69-8399-1c21d54c7624] Running
	I1120 21:11:40.197721  213043 system_pods.go:89] "kube-proxy-n9cg7" [77a3defc-bd58-414c-9c2a-bf750429a720] Running
	I1120 21:11:40.197725  213043 system_pods.go:89] "kube-scheduler-no-preload-882483" [40685019-a9a4-4a4a-9e42-58d2a9f39f9b] Running
	I1120 21:11:40.197732  213043 system_pods.go:89] "storage-provisioner" [1698ab50-608c-439f-b6de-81323e57d2c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:11:40.197751  213043 retry.go:31] will retry after 377.800802ms: missing components: kube-dns
	I1120 21:11:40.593257  213043 system_pods.go:86] 8 kube-system pods found
	I1120 21:11:40.593306  213043 system_pods.go:89] "coredns-66bc5c9577-kbl4d" [7e90701b-e158-4e32-b311-ef635af8eec0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:11:40.593313  213043 system_pods.go:89] "etcd-no-preload-882483" [2cc20a64-e788-4bd3-99ae-22f5e88054c6] Running
	I1120 21:11:40.593319  213043 system_pods.go:89] "kindnet-jr57n" [43754294-4619-410a-9cf0-01baa9df142e] Running
	I1120 21:11:40.593324  213043 system_pods.go:89] "kube-apiserver-no-preload-882483" [e355b3e2-5d70-4e01-b443-2e5c756584db] Running
	I1120 21:11:40.593329  213043 system_pods.go:89] "kube-controller-manager-no-preload-882483" [b7f3c12d-8398-4a69-8399-1c21d54c7624] Running
	I1120 21:11:40.593333  213043 system_pods.go:89] "kube-proxy-n9cg7" [77a3defc-bd58-414c-9c2a-bf750429a720] Running
	I1120 21:11:40.593338  213043 system_pods.go:89] "kube-scheduler-no-preload-882483" [40685019-a9a4-4a4a-9e42-58d2a9f39f9b] Running
	I1120 21:11:40.593344  213043 system_pods.go:89] "storage-provisioner" [1698ab50-608c-439f-b6de-81323e57d2c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:11:40.593365  213043 retry.go:31] will retry after 415.468389ms: missing components: kube-dns
	I1120 21:11:41.014146  213043 system_pods.go:86] 8 kube-system pods found
	I1120 21:11:41.014181  213043 system_pods.go:89] "coredns-66bc5c9577-kbl4d" [7e90701b-e158-4e32-b311-ef635af8eec0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:11:41.014188  213043 system_pods.go:89] "etcd-no-preload-882483" [2cc20a64-e788-4bd3-99ae-22f5e88054c6] Running
	I1120 21:11:41.014194  213043 system_pods.go:89] "kindnet-jr57n" [43754294-4619-410a-9cf0-01baa9df142e] Running
	I1120 21:11:41.014204  213043 system_pods.go:89] "kube-apiserver-no-preload-882483" [e355b3e2-5d70-4e01-b443-2e5c756584db] Running
	I1120 21:11:41.014210  213043 system_pods.go:89] "kube-controller-manager-no-preload-882483" [b7f3c12d-8398-4a69-8399-1c21d54c7624] Running
	I1120 21:11:41.014214  213043 system_pods.go:89] "kube-proxy-n9cg7" [77a3defc-bd58-414c-9c2a-bf750429a720] Running
	I1120 21:11:41.014219  213043 system_pods.go:89] "kube-scheduler-no-preload-882483" [40685019-a9a4-4a4a-9e42-58d2a9f39f9b] Running
	I1120 21:11:41.014230  213043 system_pods.go:89] "storage-provisioner" [1698ab50-608c-439f-b6de-81323e57d2c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:11:41.014248  213043 retry.go:31] will retry after 444.646673ms: missing components: kube-dns
	I1120 21:11:41.463176  213043 system_pods.go:86] 8 kube-system pods found
	I1120 21:11:41.463210  213043 system_pods.go:89] "coredns-66bc5c9577-kbl4d" [7e90701b-e158-4e32-b311-ef635af8eec0] Running
	I1120 21:11:41.463217  213043 system_pods.go:89] "etcd-no-preload-882483" [2cc20a64-e788-4bd3-99ae-22f5e88054c6] Running
	I1120 21:11:41.463222  213043 system_pods.go:89] "kindnet-jr57n" [43754294-4619-410a-9cf0-01baa9df142e] Running
	I1120 21:11:41.463231  213043 system_pods.go:89] "kube-apiserver-no-preload-882483" [e355b3e2-5d70-4e01-b443-2e5c756584db] Running
	I1120 21:11:41.463236  213043 system_pods.go:89] "kube-controller-manager-no-preload-882483" [b7f3c12d-8398-4a69-8399-1c21d54c7624] Running
	I1120 21:11:41.463241  213043 system_pods.go:89] "kube-proxy-n9cg7" [77a3defc-bd58-414c-9c2a-bf750429a720] Running
	I1120 21:11:41.463245  213043 system_pods.go:89] "kube-scheduler-no-preload-882483" [40685019-a9a4-4a4a-9e42-58d2a9f39f9b] Running
	I1120 21:11:41.463248  213043 system_pods.go:89] "storage-provisioner" [1698ab50-608c-439f-b6de-81323e57d2c8] Running
	I1120 21:11:41.463256  213043 system_pods.go:126] duration metric: took 1.489088106s to wait for k8s-apps to be running ...
	I1120 21:11:41.463815  213043 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:11:41.463897  213043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:11:41.478805  213043 system_svc.go:56] duration metric: took 15.533244ms WaitForService to wait for kubelet
	I1120 21:11:41.478831  213043 kubeadm.go:587] duration metric: took 15.191384736s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:11:41.478850  213043 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:11:41.481776  213043 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:11:41.481816  213043 node_conditions.go:123] node cpu capacity is 2
	I1120 21:11:41.481829  213043 node_conditions.go:105] duration metric: took 2.972812ms to run NodePressure ...
	I1120 21:11:41.481842  213043 start.go:242] waiting for startup goroutines ...
	I1120 21:11:41.481854  213043 start.go:247] waiting for cluster config update ...
	I1120 21:11:41.481868  213043 start.go:256] writing updated cluster config ...
	I1120 21:11:41.482192  213043 ssh_runner.go:195] Run: rm -f paused
	I1120 21:11:41.487575  213043 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:11:41.491383  213043 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kbl4d" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:41.496770  213043 pod_ready.go:94] pod "coredns-66bc5c9577-kbl4d" is "Ready"
	I1120 21:11:41.496795  213043 pod_ready.go:86] duration metric: took 5.33932ms for pod "coredns-66bc5c9577-kbl4d" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:41.499177  213043 pod_ready.go:83] waiting for pod "etcd-no-preload-882483" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:41.504084  213043 pod_ready.go:94] pod "etcd-no-preload-882483" is "Ready"
	I1120 21:11:41.504111  213043 pod_ready.go:86] duration metric: took 4.906837ms for pod "etcd-no-preload-882483" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:41.506620  213043 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-882483" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:41.511433  213043 pod_ready.go:94] pod "kube-apiserver-no-preload-882483" is "Ready"
	I1120 21:11:41.511459  213043 pod_ready.go:86] duration metric: took 4.811968ms for pod "kube-apiserver-no-preload-882483" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:41.513936  213043 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-882483" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:41.892092  213043 pod_ready.go:94] pod "kube-controller-manager-no-preload-882483" is "Ready"
	I1120 21:11:41.892121  213043 pod_ready.go:86] duration metric: took 378.162369ms for pod "kube-controller-manager-no-preload-882483" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:42.092905  213043 pod_ready.go:83] waiting for pod "kube-proxy-n9cg7" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:42.491797  213043 pod_ready.go:94] pod "kube-proxy-n9cg7" is "Ready"
	I1120 21:11:42.491827  213043 pod_ready.go:86] duration metric: took 398.890514ms for pod "kube-proxy-n9cg7" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:42.692310  213043 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-882483" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:43.091974  213043 pod_ready.go:94] pod "kube-scheduler-no-preload-882483" is "Ready"
	I1120 21:11:43.092006  213043 pod_ready.go:86] duration metric: took 399.653499ms for pod "kube-scheduler-no-preload-882483" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:43.092019  213043 pod_ready.go:40] duration metric: took 1.604411611s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:11:43.166546  213043 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 21:11:43.168303  213043 out.go:179] * Done! kubectl is now configured to use "no-preload-882483" cluster and "default" namespace by default
	W1120 21:11:43.907939  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:45.908969  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:48.411380  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:50.908333  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3332ff430ea2e       1611cd07b61d5       7 seconds ago       Running             busybox                   0                   35e5c6836ebe6       busybox                                     default
	8ba1448c5208e       138784d87c9c5       13 seconds ago      Running             coredns                   0                   ac398ec19623a       coredns-66bc5c9577-kbl4d                    kube-system
	ac1d82c386c0d       66749159455b3       13 seconds ago      Running             storage-provisioner       0                   53d0cc536e5d7       storage-provisioner                         kube-system
	6ae51b7304d17       b1a8c6f707935       24 seconds ago      Running             kindnet-cni               0                   f8231ead75fef       kindnet-jr57n                               kube-system
	f3a97777b67b3       05baa95f5142d       26 seconds ago      Running             kube-proxy                0                   a042a50bc5134       kube-proxy-n9cg7                            kube-system
	a08eea19da810       b5f57ec6b9867       45 seconds ago      Running             kube-scheduler            0                   3f085b80662ac       kube-scheduler-no-preload-882483            kube-system
	1d98f7949f8d4       7eb2c6ff0c5a7       45 seconds ago      Running             kube-controller-manager   0                   dff427621068f       kube-controller-manager-no-preload-882483   kube-system
	680c56dfb4909       a1894772a478e       45 seconds ago      Running             etcd                      0                   fd0e4ce635277       etcd-no-preload-882483                      kube-system
	ff050fee197a6       43911e833d64d       45 seconds ago      Running             kube-apiserver            0                   1498cf02d9a01       kube-apiserver-no-preload-882483            kube-system
	
	
	==> containerd <==
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.384941202Z" level=info msg="connecting to shim ac1d82c386c0dd060910eebb103d9a8ec94f7a984f33fd70ccfd4c5757297c5a" address="unix:///run/containerd/s/837b1dad384941854c9301a2342d31b629c980d2071399c4d6de8f94aafa53cc" protocol=ttrpc version=3
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.428360748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kbl4d,Uid:7e90701b-e158-4e32-b311-ef635af8eec0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac398ec19623a98630af63b6d40ace684172f279e130305031a3d4df61854159\""
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.437483530Z" level=info msg="CreateContainer within sandbox \"ac398ec19623a98630af63b6d40ace684172f279e130305031a3d4df61854159\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.445141776Z" level=info msg="Container 8ba1448c5208ed35c1cd3fe0d2684eedb395466127060c252bfb7797b5f59ca6: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.454312673Z" level=info msg="CreateContainer within sandbox \"ac398ec19623a98630af63b6d40ace684172f279e130305031a3d4df61854159\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ba1448c5208ed35c1cd3fe0d2684eedb395466127060c252bfb7797b5f59ca6\""
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.457151379Z" level=info msg="StartContainer for \"8ba1448c5208ed35c1cd3fe0d2684eedb395466127060c252bfb7797b5f59ca6\""
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.458258656Z" level=info msg="connecting to shim 8ba1448c5208ed35c1cd3fe0d2684eedb395466127060c252bfb7797b5f59ca6" address="unix:///run/containerd/s/a9dcb32955697df086aec876cf871ee9606522f219c92fe39f95dfe16e76ad0a" protocol=ttrpc version=3
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.509002165Z" level=info msg="StartContainer for \"ac1d82c386c0dd060910eebb103d9a8ec94f7a984f33fd70ccfd4c5757297c5a\" returns successfully"
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.573710061Z" level=info msg="StartContainer for \"8ba1448c5208ed35c1cd3fe0d2684eedb395466127060c252bfb7797b5f59ca6\" returns successfully"
	Nov 20 21:11:43 no-preload-882483 containerd[756]: time="2025-11-20T21:11:43.731959628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3914dd0d-f188-4b9a-8dd2-72c422726597,Namespace:default,Attempt:0,}"
	Nov 20 21:11:43 no-preload-882483 containerd[756]: time="2025-11-20T21:11:43.777589429Z" level=info msg="connecting to shim 35e5c6836ebe678994871e00fef5b39f4cbff589ce84acd3dd8c33e0591d6623" address="unix:///run/containerd/s/a3dec73619f134ef466677d948be5f86a32022b64e0677e95eb805ce4b1efab8" namespace=k8s.io protocol=ttrpc version=3
	Nov 20 21:11:43 no-preload-882483 containerd[756]: time="2025-11-20T21:11:43.847369614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3914dd0d-f188-4b9a-8dd2-72c422726597,Namespace:default,Attempt:0,} returns sandbox id \"35e5c6836ebe678994871e00fef5b39f4cbff589ce84acd3dd8c33e0591d6623\""
	Nov 20 21:11:43 no-preload-882483 containerd[756]: time="2025-11-20T21:11:43.851209084Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 21:11:45 no-preload-882483 containerd[756]: time="2025-11-20T21:11:45.991991240Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:11:45 no-preload-882483 containerd[756]: time="2025-11-20T21:11:45.992969604Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937185"
	Nov 20 21:11:45 no-preload-882483 containerd[756]: time="2025-11-20T21:11:45.994150713Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:11:45 no-preload-882483 containerd[756]: time="2025-11-20T21:11:45.996926139Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:11:45 no-preload-882483 containerd[756]: time="2025-11-20T21:11:45.997590340Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.146332147s"
	Nov 20 21:11:45 no-preload-882483 containerd[756]: time="2025-11-20T21:11:45.997703696Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 20 21:11:46 no-preload-882483 containerd[756]: time="2025-11-20T21:11:46.004171332Z" level=info msg="CreateContainer within sandbox \"35e5c6836ebe678994871e00fef5b39f4cbff589ce84acd3dd8c33e0591d6623\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 20 21:11:46 no-preload-882483 containerd[756]: time="2025-11-20T21:11:46.011437777Z" level=info msg="Container 3332ff430ea2e3074b03bf83a07d619422acf6aa526c1d23eec70772cb72f175: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 21:11:46 no-preload-882483 containerd[756]: time="2025-11-20T21:11:46.034617132Z" level=info msg="CreateContainer within sandbox \"35e5c6836ebe678994871e00fef5b39f4cbff589ce84acd3dd8c33e0591d6623\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"3332ff430ea2e3074b03bf83a07d619422acf6aa526c1d23eec70772cb72f175\""
	Nov 20 21:11:46 no-preload-882483 containerd[756]: time="2025-11-20T21:11:46.037995900Z" level=info msg="StartContainer for \"3332ff430ea2e3074b03bf83a07d619422acf6aa526c1d23eec70772cb72f175\""
	Nov 20 21:11:46 no-preload-882483 containerd[756]: time="2025-11-20T21:11:46.040460921Z" level=info msg="connecting to shim 3332ff430ea2e3074b03bf83a07d619422acf6aa526c1d23eec70772cb72f175" address="unix:///run/containerd/s/a3dec73619f134ef466677d948be5f86a32022b64e0677e95eb805ce4b1efab8" protocol=ttrpc version=3
	Nov 20 21:11:46 no-preload-882483 containerd[756]: time="2025-11-20T21:11:46.134322525Z" level=info msg="StartContainer for \"3332ff430ea2e3074b03bf83a07d619422acf6aa526c1d23eec70772cb72f175\" returns successfully"
	
	
	==> coredns [8ba1448c5208ed35c1cd3fe0d2684eedb395466127060c252bfb7797b5f59ca6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43155 - 29135 "HINFO IN 7457519116205105251.4743579555540716560. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020670895s
	
	
	==> describe nodes <==
	Name:               no-preload-882483
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-882483
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=no-preload-882483
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_11_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:11:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-882483
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:11:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:11:51 +0000   Thu, 20 Nov 2025 21:11:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:11:51 +0000   Thu, 20 Nov 2025 21:11:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:11:51 +0000   Thu, 20 Nov 2025 21:11:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:11:51 +0000   Thu, 20 Nov 2025 21:11:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-882483
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                3ba0793b-34b4-41b6-b5b2-549aaf1b0ffc
	  Boot ID:                    0cc3a06a-788d-45d4-8fff-2131330a9ee0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-kbl4d                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-no-preload-882483                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-jr57n                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-882483             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-882483    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-n9cg7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-882483             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Warning  CgroupV1                 47s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node no-preload-882483 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node no-preload-882483 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     47s (x7 over 47s)  kubelet          Node no-preload-882483 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  47s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  33s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node no-preload-882483 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node no-preload-882483 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node no-preload-882483 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29s                node-controller  Node no-preload-882483 event: Registered Node no-preload-882483 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-882483 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov20 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014399] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.765613] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.782554] kauditd_printk_skb: 36 callbacks suppressed
	[Nov20 20:40] hrtimer: interrupt took 1888672 ns
	
	
	==> etcd [680c56dfb49094937510bccdd6d9cde462c90e51b1a41a672c85e8160ca93ca1] <==
	{"level":"warn","ts":"2025-11-20T21:11:15.081553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.102776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.138030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.213432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.249425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.296059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.411545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.458729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.502767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.543291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.581865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.707633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.722537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.773289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.784361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.844891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.940032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.945816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.992857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:16.049315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:16.074985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:16.105411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:16.163176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:16.256769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:16.545560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48002","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:11:54 up 54 min,  0 user,  load average: 3.54, 3.31, 2.85
	Linux no-preload-882483 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6ae51b7304d17b3fff94baae871c8f4a1af4bacafee33361038b139626b00d12] <==
	I1120 21:11:29.528699       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:11:29.619151       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 21:11:29.619361       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:11:29.619384       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:11:29.619401       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:11:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:11:29.824136       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:11:29.824329       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:11:29.919286       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:11:29.919579       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:11:30.419490       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:11:30.419752       1 metrics.go:72] Registering metrics
	I1120 21:11:30.420061       1 controller.go:711] "Syncing nftables rules"
	I1120 21:11:39.829242       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:11:39.829307       1 main.go:301] handling current node
	I1120 21:11:49.822527       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:11:49.822564       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ff050fee197a6920f29180dac6d4c1f8f4db987e76a6f6cfff1a6c0a017071ec] <==
	I1120 21:11:18.552091       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 21:11:18.552247       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1120 21:11:18.584221       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:11:18.584533       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 21:11:18.646891       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:11:18.647080       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:11:18.756889       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:11:18.986459       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 21:11:19.001848       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 21:11:19.002060       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:11:19.986978       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:11:20.067821       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:11:20.174391       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 21:11:20.198325       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1120 21:11:20.199816       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:11:20.212826       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:11:20.402005       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:11:21.009116       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:11:21.059363       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 21:11:21.077319       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:11:25.608435       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:11:25.615335       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:11:25.906768       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:11:26.506951       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1120 21:11:52.620352       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:55244: use of closed network connection
	
	
	==> kube-controller-manager [1d98f7949f8d446666313ef7b81cfc3ced91f03248f87f9d0926e5d14a16e359] <==
	I1120 21:11:25.451324       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 21:11:25.451620       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 21:11:25.452132       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 21:11:25.452754       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-882483" podCIDRs=["10.244.0.0/24"]
	I1120 21:11:25.453120       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 21:11:25.450101       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 21:11:25.453354       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 21:11:25.453458       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-882483"
	I1120 21:11:25.453535       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1120 21:11:25.450121       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 21:11:25.450148       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 21:11:25.454300       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 21:11:25.454355       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 21:11:25.456072       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 21:11:25.457799       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 21:11:25.461759       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:11:25.463915       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:11:25.495447       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 21:11:25.498797       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 21:11:25.498968       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 21:11:25.501127       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 21:11:25.501138       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:11:25.501559       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 21:11:25.506226       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:11:40.456478       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f3a97777b67b3798e8d95e3392bdd5d7980ea5e430bce8a928d0f4efe5223a57] <==
	I1120 21:11:27.529305       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:11:27.612514       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:11:27.714867       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:11:27.714913       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 21:11:27.715115       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:11:27.734126       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:11:27.734186       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:11:27.738138       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:11:27.738726       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:11:27.738752       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:11:27.742093       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:11:27.742297       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:11:27.742797       1 config.go:200] "Starting service config controller"
	I1120 21:11:27.743094       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:11:27.744889       1 config.go:309] "Starting node config controller"
	I1120 21:11:27.744909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:11:27.744917       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:11:27.745442       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:11:27.745460       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:11:27.842792       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:11:27.843928       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:11:27.845519       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a08eea19da810b62a0878817a24a10b282995d1596e74e0b8a2c3bb031d8d573] <==
	E1120 21:11:18.501327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:11:18.501731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:11:18.501799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:11:18.501873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 21:11:18.501916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 21:11:18.501943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 21:11:18.502837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:11:18.502888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:11:18.502945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:11:18.502996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:11:18.503044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:11:18.503073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 21:11:19.339579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 21:11:19.394144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:11:19.422823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:11:19.471286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:11:19.564102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 21:11:19.587560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:11:19.623227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 21:11:19.631289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:11:19.649584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:11:19.660223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1120 21:11:19.713140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:11:19.720651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1120 21:11:21.734725       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:11:22 no-preload-882483 kubelet[2190]: I1120 21:11:22.099887    2190 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-no-preload-882483"
	Nov 20 21:11:22 no-preload-882483 kubelet[2190]: E1120 21:11:22.120445    2190 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-no-preload-882483\" already exists" pod="kube-system/kube-scheduler-no-preload-882483"
	Nov 20 21:11:22 no-preload-882483 kubelet[2190]: I1120 21:11:22.136821    2190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-882483" podStartSLOduration=1.136799669 podStartE2EDuration="1.136799669s" podCreationTimestamp="2025-11-20 21:11:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:22.121610073 +0000 UTC m=+1.250319770" watchObservedRunningTime="2025-11-20 21:11:22.136799669 +0000 UTC m=+1.265509358"
	Nov 20 21:11:22 no-preload-882483 kubelet[2190]: I1120 21:11:22.162094    2190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-882483" podStartSLOduration=1.162074788 podStartE2EDuration="1.162074788s" podCreationTimestamp="2025-11-20 21:11:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:22.138047388 +0000 UTC m=+1.266757069" watchObservedRunningTime="2025-11-20 21:11:22.162074788 +0000 UTC m=+1.290784469"
	Nov 20 21:11:25 no-preload-882483 kubelet[2190]: I1120 21:11:25.541540    2190 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 21:11:25 no-preload-882483 kubelet[2190]: I1120 21:11:25.543047    2190 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.632427    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77a3defc-bd58-414c-9c2a-bf750429a720-xtables-lock\") pod \"kube-proxy-n9cg7\" (UID: \"77a3defc-bd58-414c-9c2a-bf750429a720\") " pod="kube-system/kube-proxy-n9cg7"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.632483    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43754294-4619-410a-9cf0-01baa9df142e-xtables-lock\") pod \"kindnet-jr57n\" (UID: \"43754294-4619-410a-9cf0-01baa9df142e\") " pod="kube-system/kindnet-jr57n"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.632508    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/77a3defc-bd58-414c-9c2a-bf750429a720-kube-proxy\") pod \"kube-proxy-n9cg7\" (UID: \"77a3defc-bd58-414c-9c2a-bf750429a720\") " pod="kube-system/kube-proxy-n9cg7"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.632539    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77a3defc-bd58-414c-9c2a-bf750429a720-lib-modules\") pod \"kube-proxy-n9cg7\" (UID: \"77a3defc-bd58-414c-9c2a-bf750429a720\") " pod="kube-system/kube-proxy-n9cg7"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.632558    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrfnb\" (UniqueName: \"kubernetes.io/projected/77a3defc-bd58-414c-9c2a-bf750429a720-kube-api-access-hrfnb\") pod \"kube-proxy-n9cg7\" (UID: \"77a3defc-bd58-414c-9c2a-bf750429a720\") " pod="kube-system/kube-proxy-n9cg7"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.632657    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/43754294-4619-410a-9cf0-01baa9df142e-cni-cfg\") pod \"kindnet-jr57n\" (UID: \"43754294-4619-410a-9cf0-01baa9df142e\") " pod="kube-system/kindnet-jr57n"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.632714    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43754294-4619-410a-9cf0-01baa9df142e-lib-modules\") pod \"kindnet-jr57n\" (UID: \"43754294-4619-410a-9cf0-01baa9df142e\") " pod="kube-system/kindnet-jr57n"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.632730    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74jgk\" (UniqueName: \"kubernetes.io/projected/43754294-4619-410a-9cf0-01baa9df142e-kube-api-access-74jgk\") pod \"kindnet-jr57n\" (UID: \"43754294-4619-410a-9cf0-01baa9df142e\") " pod="kube-system/kindnet-jr57n"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.768382    2190 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 21:11:28 no-preload-882483 kubelet[2190]: I1120 21:11:28.149267    2190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n9cg7" podStartSLOduration=2.149223963 podStartE2EDuration="2.149223963s" podCreationTimestamp="2025-11-20 21:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:28.148883182 +0000 UTC m=+7.277592871" watchObservedRunningTime="2025-11-20 21:11:28.149223963 +0000 UTC m=+7.277933644"
	Nov 20 21:11:39 no-preload-882483 kubelet[2190]: I1120 21:11:39.887023    2190 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 21:11:39 no-preload-882483 kubelet[2190]: I1120 21:11:39.923091    2190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-jr57n" podStartSLOduration=11.916044724 podStartE2EDuration="13.923073445s" podCreationTimestamp="2025-11-20 21:11:26 +0000 UTC" firstStartedPulling="2025-11-20 21:11:27.259885095 +0000 UTC m=+6.388594776" lastFinishedPulling="2025-11-20 21:11:29.266913816 +0000 UTC m=+8.395623497" observedRunningTime="2025-11-20 21:11:30.165306592 +0000 UTC m=+9.294016273" watchObservedRunningTime="2025-11-20 21:11:39.923073445 +0000 UTC m=+19.051783126"
	Nov 20 21:11:39 no-preload-882483 kubelet[2190]: I1120 21:11:39.940547    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1698ab50-608c-439f-b6de-81323e57d2c8-tmp\") pod \"storage-provisioner\" (UID: \"1698ab50-608c-439f-b6de-81323e57d2c8\") " pod="kube-system/storage-provisioner"
	Nov 20 21:11:39 no-preload-882483 kubelet[2190]: I1120 21:11:39.940625    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz66s\" (UniqueName: \"kubernetes.io/projected/1698ab50-608c-439f-b6de-81323e57d2c8-kube-api-access-pz66s\") pod \"storage-provisioner\" (UID: \"1698ab50-608c-439f-b6de-81323e57d2c8\") " pod="kube-system/storage-provisioner"
	Nov 20 21:11:40 no-preload-882483 kubelet[2190]: I1120 21:11:40.043771    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e90701b-e158-4e32-b311-ef635af8eec0-config-volume\") pod \"coredns-66bc5c9577-kbl4d\" (UID: \"7e90701b-e158-4e32-b311-ef635af8eec0\") " pod="kube-system/coredns-66bc5c9577-kbl4d"
	Nov 20 21:11:40 no-preload-882483 kubelet[2190]: I1120 21:11:40.044031    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rws28\" (UniqueName: \"kubernetes.io/projected/7e90701b-e158-4e32-b311-ef635af8eec0-kube-api-access-rws28\") pod \"coredns-66bc5c9577-kbl4d\" (UID: \"7e90701b-e158-4e32-b311-ef635af8eec0\") " pod="kube-system/coredns-66bc5c9577-kbl4d"
	Nov 20 21:11:41 no-preload-882483 kubelet[2190]: I1120 21:11:41.211383    2190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kbl4d" podStartSLOduration=15.211364053 podStartE2EDuration="15.211364053s" podCreationTimestamp="2025-11-20 21:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:41.197445025 +0000 UTC m=+20.326154722" watchObservedRunningTime="2025-11-20 21:11:41.211364053 +0000 UTC m=+20.340073742"
	Nov 20 21:11:41 no-preload-882483 kubelet[2190]: I1120 21:11:41.211533    2190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.211526599 podStartE2EDuration="14.211526599s" podCreationTimestamp="2025-11-20 21:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:41.210866599 +0000 UTC m=+20.339576305" watchObservedRunningTime="2025-11-20 21:11:41.211526599 +0000 UTC m=+20.340236288"
	Nov 20 21:11:43 no-preload-882483 kubelet[2190]: I1120 21:11:43.471451    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffprn\" (UniqueName: \"kubernetes.io/projected/3914dd0d-f188-4b9a-8dd2-72c422726597-kube-api-access-ffprn\") pod \"busybox\" (UID: \"3914dd0d-f188-4b9a-8dd2-72c422726597\") " pod="default/busybox"
	
	
	==> storage-provisioner [ac1d82c386c0dd060910eebb103d9a8ec94f7a984f33fd70ccfd4c5757297c5a] <==
	I1120 21:11:40.504988       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 21:11:40.627581       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 21:11:40.627655       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 21:11:40.631587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:40.653487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:11:40.654833       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 21:11:40.655198       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-882483_7ab555a1-1d59-44be-9ed9-d3982c29f190!
	I1120 21:11:40.656970       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6e59f04-25e3-468b-be2a-acd42c0d8ce9", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-882483_7ab555a1-1d59-44be-9ed9-d3982c29f190 became leader
	W1120 21:11:40.660976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:40.668888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:11:40.756331       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-882483_7ab555a1-1d59-44be-9ed9-d3982c29f190!
	W1120 21:11:42.672417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:42.677961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:44.687452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:44.693167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:46.696595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:46.702149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:48.705360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:48.710790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:50.714207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:50.721668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:52.726721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:52.734927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-882483 -n no-preload-882483
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-882483 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-882483
helpers_test.go:243: (dbg) docker inspect no-preload-882483:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1f0d2ad1dcb3759e146761e2c8578d4459fc993b4a4b3bc9532b7630f4912bdf",
	        "Created": "2025-11-20T21:10:26.010385926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213588,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:10:26.107513525Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/1f0d2ad1dcb3759e146761e2c8578d4459fc993b4a4b3bc9532b7630f4912bdf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f0d2ad1dcb3759e146761e2c8578d4459fc993b4a4b3bc9532b7630f4912bdf/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f0d2ad1dcb3759e146761e2c8578d4459fc993b4a4b3bc9532b7630f4912bdf/hosts",
	        "LogPath": "/var/lib/docker/containers/1f0d2ad1dcb3759e146761e2c8578d4459fc993b4a4b3bc9532b7630f4912bdf/1f0d2ad1dcb3759e146761e2c8578d4459fc993b4a4b3bc9532b7630f4912bdf-json.log",
	        "Name": "/no-preload-882483",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-882483:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-882483",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1f0d2ad1dcb3759e146761e2c8578d4459fc993b4a4b3bc9532b7630f4912bdf",
	                "LowerDir": "/var/lib/docker/overlay2/081a7dcffb05310abec02624633cefedd83b62ed44013ab8180d55a713ef8131-init/diff:/var/lib/docker/overlay2/5105da773b59b243b777c3c083d206b6a741bd11ebc5a0283799917fe36ebbb2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/081a7dcffb05310abec02624633cefedd83b62ed44013ab8180d55a713ef8131/merged",
	                "UpperDir": "/var/lib/docker/overlay2/081a7dcffb05310abec02624633cefedd83b62ed44013ab8180d55a713ef8131/diff",
	                "WorkDir": "/var/lib/docker/overlay2/081a7dcffb05310abec02624633cefedd83b62ed44013ab8180d55a713ef8131/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-882483",
	                "Source": "/var/lib/docker/volumes/no-preload-882483/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-882483",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-882483",
	                "name.minikube.sigs.k8s.io": "no-preload-882483",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1bd69978a310b14ac63f2b35c7992b572003e94bc8ce123766c440495956e954",
	            "SandboxKey": "/var/run/docker/netns/1bd69978a310",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-882483": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:56:79:89:22:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3914a1636d2aee4a414b22a4dfd645a85cd3facf5fdd8976d88ddaba212b7449",
	                    "EndpointID": "f1965385d05f51203f4933be889fcb940887b37e8f2fafef1a66e58a82c8086c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-882483",
	                        "1f0d2ad1dcb3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-882483 -n no-preload-882483
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-882483 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-882483 logs -n 25: (1.234192112s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-448616 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-982573                                                                                                                                                                                                                        │ kubernetes-upgrade-982573 │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:06 UTC │
	│ delete  │ -p cilium-448616                                                                                                                                                                                                                                    │ cilium-448616             │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:06 UTC │
	│ start   │ -p force-systemd-env-444240 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-444240  │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p cert-expiration-339813 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-339813    │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ force-systemd-env-444240 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-444240  │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ delete  │ -p force-systemd-env-444240                                                                                                                                                                                                                         │ force-systemd-env-444240  │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p cert-options-530158 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-530158       │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ cert-options-530158 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-530158       │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ -p cert-options-530158 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-530158       │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ delete  │ -p cert-options-530158                                                                                                                                                                                                                              │ cert-options-530158       │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p old-k8s-version-023521 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:08 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-023521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │ 20 Nov 25 21:09 UTC │
	│ stop    │ -p old-k8s-version-023521 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │ 20 Nov 25 21:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-023521 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │ 20 Nov 25 21:09 UTC │
	│ start   │ -p old-k8s-version-023521 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ -p cert-expiration-339813 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-339813    │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p cert-expiration-339813                                                                                                                                                                                                                           │ cert-expiration-339813    │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ image   │ old-k8s-version-023521 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ -p no-preload-882483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-882483         │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:11 UTC │
	│ pause   │ -p old-k8s-version-023521 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ unpause │ -p old-k8s-version-023521 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p old-k8s-version-023521                                                                                                                                                                                                                           │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p old-k8s-version-023521                                                                                                                                                                                                                           │ old-k8s-version-023521    │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ -p embed-certs-121127 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-121127        │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:10:32
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:10:32.272234  215319 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:10:32.272427  215319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:10:32.272454  215319 out.go:374] Setting ErrFile to fd 2...
	I1120 21:10:32.272477  215319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:10:32.272794  215319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 21:10:32.273270  215319 out.go:368] Setting JSON to false
	I1120 21:10:32.274261  215319 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3182,"bootTime":1763669851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1120 21:10:32.274365  215319 start.go:143] virtualization:  
	I1120 21:10:32.278249  215319 out.go:179] * [embed-certs-121127] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:10:32.281536  215319 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:10:32.281590  215319 notify.go:221] Checking for updates...
	I1120 21:10:32.285312  215319 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:10:32.288516  215319 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 21:10:32.291528  215319 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	I1120 21:10:32.294538  215319 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:10:32.297547  215319 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:10:32.301075  215319 config.go:182] Loaded profile config "no-preload-882483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 21:10:32.301249  215319 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:10:32.333529  215319 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:10:32.333649  215319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:10:32.417912  215319 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-20 21:10:32.407575159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:10:32.418017  215319 docker.go:319] overlay module found
	I1120 21:10:32.421353  215319 out.go:179] * Using the docker driver based on user configuration
	I1120 21:10:32.424325  215319 start.go:309] selected driver: docker
	I1120 21:10:32.424349  215319 start.go:930] validating driver "docker" against <nil>
	I1120 21:10:32.424363  215319 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:10:32.425059  215319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:10:32.513107  215319 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-20 21:10:32.502549025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:10:32.513274  215319 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 21:10:32.513509  215319 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:10:32.516584  215319 out.go:179] * Using Docker driver with root privileges
	I1120 21:10:32.519406  215319 cni.go:84] Creating CNI manager for ""
	I1120 21:10:32.519478  215319 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:10:32.519490  215319 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:10:32.519579  215319 start.go:353] cluster config:
	{Name:embed-certs-121127 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-121127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:10:32.522717  215319 out.go:179] * Starting "embed-certs-121127" primary control-plane node in "embed-certs-121127" cluster
	I1120 21:10:32.525448  215319 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 21:10:32.528424  215319 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:10:32.531154  215319 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 21:10:32.531209  215319 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1120 21:10:32.531219  215319 cache.go:65] Caching tarball of preloaded images
	I1120 21:10:32.531306  215319 preload.go:238] Found /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1120 21:10:32.531323  215319 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1120 21:10:32.531409  215319 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:10:32.531695  215319 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/config.json ...
	I1120 21:10:32.531724  215319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/config.json: {Name:mkf1caef776ab7651062c2e535c2c88870c5e983 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:32.569179  215319 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:10:32.569206  215319 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:10:32.569221  215319 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:10:32.569244  215319 start.go:360] acquireMachinesLock for embed-certs-121127: {Name:mk01ab0b00d92a3a57a2470bc1735436b9279226 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:10:32.569377  215319 start.go:364] duration metric: took 93.942µs to acquireMachinesLock for "embed-certs-121127"
	I1120 21:10:32.569411  215319 start.go:93] Provisioning new machine with config: &{Name:embed-certs-121127 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-121127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 21:10:32.569485  215319 start.go:125] createHost starting for "" (driver="docker")
	I1120 21:10:30.823943  213043 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-882483
	
	I1120 21:10:30.823965  213043 ubuntu.go:182] provisioning hostname "no-preload-882483"
	I1120 21:10:30.824029  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:30.849688  213043 main.go:143] libmachine: Using SSH client type: native
	I1120 21:10:30.849989  213043 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1120 21:10:30.850000  213043 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-882483 && echo "no-preload-882483" | sudo tee /etc/hostname
	I1120 21:10:31.045383  213043 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-882483
	
	I1120 21:10:31.045498  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:31.066529  213043 main.go:143] libmachine: Using SSH client type: native
	I1120 21:10:31.066869  213043 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1120 21:10:31.066887  213043 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-882483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-882483/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-882483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:10:31.230853  213043 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:10:31.230882  213043 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-2300/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-2300/.minikube}
	I1120 21:10:31.230914  213043 ubuntu.go:190] setting up certificates
	I1120 21:10:31.230923  213043 provision.go:84] configureAuth start
	I1120 21:10:31.230989  213043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-882483
	I1120 21:10:31.268164  213043 provision.go:143] copyHostCerts
	I1120 21:10:31.268386  213043 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem, removing ...
	I1120 21:10:31.268398  213043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem
	I1120 21:10:31.268536  213043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem (1078 bytes)
	I1120 21:10:31.268750  213043 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem, removing ...
	I1120 21:10:31.268760  213043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem
	I1120 21:10:31.268797  213043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem (1123 bytes)
	I1120 21:10:31.268939  213043 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem, removing ...
	I1120 21:10:31.268949  213043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem
	I1120 21:10:31.269019  213043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem (1675 bytes)
	I1120 21:10:31.269128  213043 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem org=jenkins.no-preload-882483 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-882483]
	I1120 21:10:31.500851  213043 provision.go:177] copyRemoteCerts
	I1120 21:10:31.500966  213043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:10:31.501044  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:31.526060  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:31.631846  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:10:31.664345  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 21:10:31.689418  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:10:31.712693  213043 provision.go:87] duration metric: took 481.750269ms to configureAuth
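The server certificate generated at provision.go:117 above is what the node's TLS endpoints present; its SANs (127.0.0.1, 192.168.76.2, localhost, minikube, no-preload-882483) can be double-checked from the host with a plain openssl call. This is only an illustrative verification sketch, not something the test itself runs:

    # Inspect the SANs of the freshly generated server cert (host-side check)
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'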
	I1120 21:10:31.712717  213043 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:10:31.712893  213043 config.go:182] Loaded profile config "no-preload-882483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 21:10:31.712900  213043 machine.go:97] duration metric: took 4.087197444s to provisionDockerMachine
	I1120 21:10:31.712907  213043 client.go:176] duration metric: took 6.936840004s to LocalClient.Create
	I1120 21:10:31.712921  213043 start.go:167] duration metric: took 6.936944408s to libmachine.API.Create "no-preload-882483"
	I1120 21:10:31.712928  213043 start.go:293] postStartSetup for "no-preload-882483" (driver="docker")
	I1120 21:10:31.712936  213043 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:10:31.712989  213043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:10:31.713029  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:31.733700  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:31.878326  213043 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:10:31.883660  213043 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:10:31.883690  213043 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:10:31.883701  213043 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-2300/.minikube/addons for local assets ...
	I1120 21:10:31.883762  213043 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-2300/.minikube/files for local assets ...
	I1120 21:10:31.883840  213043 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem -> 40892.pem in /etc/ssl/certs
	I1120 21:10:31.883944  213043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:10:31.892942  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem --> /etc/ssl/certs/40892.pem (1708 bytes)
	I1120 21:10:31.917654  213043 start.go:296] duration metric: took 204.712086ms for postStartSetup
	I1120 21:10:31.918000  213043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-882483
	I1120 21:10:31.936123  213043 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/config.json ...
	I1120 21:10:31.936389  213043 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:10:31.936443  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:31.967982  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:32.078669  213043 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:10:32.085996  213043 start.go:128] duration metric: took 7.314437317s to createHost
	I1120 21:10:32.086027  213043 start.go:83] releasing machines lock for "no-preload-882483", held for 7.314572515s
	I1120 21:10:32.086102  213043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-882483
	I1120 21:10:32.106405  213043 ssh_runner.go:195] Run: cat /version.json
	I1120 21:10:32.106490  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:32.106537  213043 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:10:32.106624  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:32.142946  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:32.159643  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:32.355216  213043 ssh_runner.go:195] Run: systemctl --version
	I1120 21:10:32.362321  213043 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:10:32.370092  213043 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:10:32.370164  213043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:10:32.401817  213043 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1120 21:10:32.401849  213043 start.go:496] detecting cgroup driver to use...
	I1120 21:10:32.401884  213043 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:10:32.401938  213043 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1120 21:10:32.422916  213043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1120 21:10:32.437483  213043 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:10:32.437558  213043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:10:32.459930  213043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:10:32.493465  213043 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:10:32.662678  213043 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:10:32.843704  213043 docker.go:234] disabling docker service ...
	I1120 21:10:32.843789  213043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:10:32.876917  213043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:10:32.895625  213043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:10:33.089053  213043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:10:33.275927  213043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:10:33.293194  213043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:10:33.338648  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1120 21:10:33.355936  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1120 21:10:33.368759  213043 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1120 21:10:33.368844  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1120 21:10:33.381875  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 21:10:33.395432  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1120 21:10:33.408098  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 21:10:33.427665  213043 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:10:33.446128  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1120 21:10:33.466290  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1120 21:10:33.483959  213043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1120 21:10:33.499683  213043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:10:33.510700  213043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:10:33.518581  213043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:10:33.709896  213043 ssh_runner.go:195] Run: sudo systemctl restart containerd
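Taken together, the sed edits above leave /etc/containerd/config.toml with SystemdCgroup = false (matching the "cgroupfs" driver detected on the host), sandbox_image = "registry.k8s.io/pause:3.10.1", restrict_oom_score_adj = false, conf_dir = "/etc/cni/net.d" and enable_unprivileged_ports = true before containerd is restarted. A quick way to confirm that by hand, purely as an illustrative sketch (for example from inside the node via `minikube ssh -p no-preload-882483`):

    # Show the containerd settings the preceding sed commands are expected to have set
    grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml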
	I1120 21:10:33.822015  213043 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1120 21:10:33.822134  213043 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1120 21:10:33.826846  213043 start.go:564] Will wait 60s for crictl version
	I1120 21:10:33.826910  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:33.831427  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:10:33.871229  213043 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1120 21:10:33.871292  213043 ssh_runner.go:195] Run: containerd --version
	I1120 21:10:33.891748  213043 ssh_runner.go:195] Run: containerd --version
	I1120 21:10:33.916967  213043 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1120 21:10:33.919885  213043 cli_runner.go:164] Run: docker network inspect no-preload-882483 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:10:33.936366  213043 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1120 21:10:33.945164  213043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:10:33.956120  213043 kubeadm.go:884] updating cluster {Name:no-preload-882483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-882483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:10:33.956230  213043 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 21:10:33.956282  213043 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:10:33.991508  213043 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1120 21:10:33.991530  213043 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1120 21:10:33.991565  213043 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:10:33.991798  213043 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 21:10:33.991887  213043 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 21:10:33.991969  213043 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 21:10:33.992048  213043 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 21:10:33.992124  213043 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1120 21:10:33.992204  213043 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1120 21:10:33.992287  213043 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 21:10:33.995739  213043 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 21:10:33.995864  213043 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 21:10:33.995919  213043 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 21:10:33.995969  213043 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:10:33.996007  213043 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 21:10:33.996052  213043 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1120 21:10:33.996098  213043 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 21:10:33.996136  213043 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1120 21:10:34.232651  213043 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1120 21:10:34.232766  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1120 21:10:34.242693  213043 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc"
	I1120 21:10:34.242790  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1120 21:10:34.243280  213043 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196"
	I1120 21:10:34.243352  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1120 21:10:34.250011  213043 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9"
	I1120 21:10:34.250142  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1120 21:10:34.250374  213043 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e"
	I1120 21:10:34.250534  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1120 21:10:34.252139  213043 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a"
	I1120 21:10:34.252252  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 21:10:34.252157  213043 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0"
	I1120 21:10:34.252385  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1120 21:10:34.282502  213043 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1120 21:10:34.282609  213043 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1120 21:10:34.282670  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:34.364616  213043 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1120 21:10:34.364808  213043 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1120 21:10:34.364896  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:34.364714  213043 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1120 21:10:34.365013  213043 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1120 21:10:34.365042  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:34.369043  213043 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1120 21:10:34.369083  213043 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1120 21:10:34.369145  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:34.369202  213043 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1120 21:10:34.369215  213043 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1120 21:10:34.369234  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:34.369556  213043 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1120 21:10:34.369581  213043 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1120 21:10:34.369613  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:34.369661  213043 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1120 21:10:34.369674  213043 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 21:10:34.369693  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:34.369764  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 21:10:34.395800  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 21:10:34.395860  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 21:10:34.395895  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 21:10:34.395942  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	W1120 21:10:34.396545  213043 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1120 21:10:34.396588  213043 retry.go:31] will retry after 175.374248ms: ssh: rejected: connect failed (open failed)
	I1120 21:10:34.418642  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 21:10:34.418788  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:34.418988  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 21:10:34.419068  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:34.419464  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 21:10:34.419549  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:32.573090  215319 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 21:10:32.573348  215319 start.go:159] libmachine.API.Create for "embed-certs-121127" (driver="docker")
	I1120 21:10:32.573382  215319 client.go:173] LocalClient.Create starting
	I1120 21:10:32.573462  215319 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem
	I1120 21:10:32.573519  215319 main.go:143] libmachine: Decoding PEM data...
	I1120 21:10:32.573540  215319 main.go:143] libmachine: Parsing certificate...
	I1120 21:10:32.573598  215319 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem
	I1120 21:10:32.573625  215319 main.go:143] libmachine: Decoding PEM data...
	I1120 21:10:32.573638  215319 main.go:143] libmachine: Parsing certificate...
	I1120 21:10:32.574005  215319 cli_runner.go:164] Run: docker network inspect embed-certs-121127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 21:10:32.593761  215319 cli_runner.go:211] docker network inspect embed-certs-121127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 21:10:32.593849  215319 network_create.go:284] running [docker network inspect embed-certs-121127] to gather additional debugging logs...
	I1120 21:10:32.593866  215319 cli_runner.go:164] Run: docker network inspect embed-certs-121127
	W1120 21:10:32.613669  215319 cli_runner.go:211] docker network inspect embed-certs-121127 returned with exit code 1
	I1120 21:10:32.613700  215319 network_create.go:287] error running [docker network inspect embed-certs-121127]: docker network inspect embed-certs-121127: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-121127 not found
	I1120 21:10:32.613712  215319 network_create.go:289] output of [docker network inspect embed-certs-121127]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-121127 not found
	
	** /stderr **
	I1120 21:10:32.613817  215319 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:10:32.635649  215319 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8f2399b7fac6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:ce:e1:0f:d8:b1} reservation:<nil>}
	I1120 21:10:32.636010  215319 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-954bfb8e5d57 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:f3:60:ee:cc:b7} reservation:<nil>}
	I1120 21:10:32.636319  215319 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-02e4726a397e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:f0:04:c7:8f:fa} reservation:<nil>}
	I1120 21:10:32.636566  215319 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3914a1636d2a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:72:79:08:e4:0c:17} reservation:<nil>}
	I1120 21:10:32.636944  215319 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ec6e0}
	I1120 21:10:32.636974  215319 network_create.go:124] attempt to create docker network embed-certs-121127 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1120 21:10:32.637039  215319 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-121127 embed-certs-121127
	I1120 21:10:32.714508  215319 network_create.go:108] docker network embed-certs-121127 192.168.85.0/24 created
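The subnet scan above simply skips every /24 already held by an existing bridge network (192.168.49.0/24 through 192.168.76.0/24 here) and settles on 192.168.85.0/24. If that selection ever needs to be reproduced by hand, an illustrative, host-side way (not part of the test run) to list the subnets minikube-created networks currently occupy is:

    # Networks carry the created_by.minikube.sigs.k8s.io=true label used in the create call above
    docker network ls -q --filter label=created_by.minikube.sigs.k8s.io=true \
      | xargs -r docker network inspect \
          --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'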
	I1120 21:10:32.714543  215319 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-121127" container
	I1120 21:10:32.714637  215319 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 21:10:32.732682  215319 cli_runner.go:164] Run: docker volume create embed-certs-121127 --label name.minikube.sigs.k8s.io=embed-certs-121127 --label created_by.minikube.sigs.k8s.io=true
	I1120 21:10:32.756507  215319 oci.go:103] Successfully created a docker volume embed-certs-121127
	I1120 21:10:32.756594  215319 cli_runner.go:164] Run: docker run --rm --name embed-certs-121127-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-121127 --entrypoint /usr/bin/test -v embed-certs-121127:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 21:10:33.387177  215319 oci.go:107] Successfully prepared a docker volume embed-certs-121127
	I1120 21:10:33.387250  215319 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 21:10:33.387259  215319 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 21:10:33.387331  215319 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-121127:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
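The tarball mounted read-only into that helper container is the same preload found in the local cache earlier; it is essentially an lz4-compressed tar of the node's /var content (including the containerd image store). A host-side sketch, assuming lz4 and tar are installed, for peeking at its contents:

    # List the first few entries of the preloaded image tarball
    lz4 -dc /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 \
      | tar -t | head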
	I1120 21:10:34.459326  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:34.459312  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:34.469757  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:34.543428  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 21:10:34.543515  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:34.572614  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:34.596526  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 21:10:34.596616  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 21:10:34.768263  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1120 21:10:34.768390  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1120 21:10:34.768474  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1120 21:10:34.768555  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1120 21:10:34.913213  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1120 21:10:35.009057  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 21:10:35.044003  213043 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1120 21:10:35.044151  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 21:10:35.044203  213043 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1120 21:10:35.044500  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1120 21:10:35.044227  213043 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1120 21:10:35.044565  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1120 21:10:35.044283  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1120 21:10:35.044661  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1120 21:10:35.090404  213043 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1120 21:10:35.090540  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1120 21:10:35.134742  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1120 21:10:35.151636  213043 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1120 21:10:35.151682  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1120 21:10:35.151825  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1120 21:10:35.151904  213043 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1120 21:10:35.151933  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	W1120 21:10:35.170164  213043 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1120 21:10:35.170361  213043 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1120 21:10:35.170495  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:10:35.302893  213043 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1120 21:10:35.302939  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1120 21:10:35.303014  213043 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1120 21:10:35.303099  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1120 21:10:35.303158  213043 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1120 21:10:35.303175  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1120 21:10:35.303244  213043 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1120 21:10:35.303299  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1120 21:10:35.303359  213043 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1120 21:10:35.303420  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1120 21:10:35.318458  213043 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1120 21:10:35.318494  213043 retry.go:31] will retry after 320.092741ms: ssh: rejected: connect failed (open failed)
	I1120 21:10:35.402542  213043 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1120 21:10:35.402627  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1120 21:10:35.402675  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:35.433428  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:35.434774  213043 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1120 21:10:35.434811  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1120 21:10:35.434868  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:35.435061  213043 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1120 21:10:35.435084  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1120 21:10:35.435128  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:35.436184  213043 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1120 21:10:35.436246  213043 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:10:35.436293  213043 ssh_runner.go:195] Run: which crictl
	I1120 21:10:35.436364  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:10:35.496688  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:35.502403  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:35.522014  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:10:35.999069  213043 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1120 21:10:35.999137  213043 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1120 21:10:35.999169  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1120 21:10:35.999284  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:10:36.372697  213043 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1120 21:10:36.372773  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1120 21:10:36.384259  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:10:39.441482  215319 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-121127:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (6.054103754s)
	I1120 21:10:39.441517  215319 kic.go:203] duration metric: took 6.054253885s to extract preloaded images to volume ...
	W1120 21:10:39.441650  215319 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1120 21:10:39.441778  215319 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 21:10:39.518129  215319 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-121127 --name embed-certs-121127 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-121127 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-121127 --network embed-certs-121127 --ip 192.168.85.2 --volume embed-certs-121127:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 21:10:39.885329  215319 cli_runner.go:164] Run: docker container inspect embed-certs-121127 --format={{.State.Running}}
	I1120 21:10:39.911983  215319 cli_runner.go:164] Run: docker container inspect embed-certs-121127 --format={{.State.Status}}
	I1120 21:10:39.942192  215319 cli_runner.go:164] Run: docker exec embed-certs-121127 stat /var/lib/dpkg/alternatives/iptables
	I1120 21:10:40.016882  215319 oci.go:144] the created container "embed-certs-121127" has a running status.
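Each `--publish=127.0.0.1::<port>` in the docker run call above lets Docker pick an ephemeral host port, which is why the SSH connections that follow target 127.0.0.1:33068 rather than port 22. An illustrative way to see the full mapping for this node:

    # Show the host ports Docker assigned to the container's published ports
    docker port embed-certs-121127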
	I1120 21:10:40.016911  215319 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa...
	I1120 21:10:40.545644  215319 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 21:10:40.577441  215319 cli_runner.go:164] Run: docker container inspect embed-certs-121127 --format={{.State.Status}}
	I1120 21:10:40.606642  215319 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 21:10:40.606662  215319 kic_runner.go:114] Args: [docker exec --privileged embed-certs-121127 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 21:10:40.698547  215319 cli_runner.go:164] Run: docker container inspect embed-certs-121127 --format={{.State.Status}}
	I1120 21:10:40.728147  215319 machine.go:94] provisionDockerMachine start ...
	I1120 21:10:40.728241  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:10:40.753894  215319 main.go:143] libmachine: Using SSH client type: native
	I1120 21:10:40.756191  215319 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1120 21:10:40.756212  215319 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:10:40.757811  215319 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
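The handshake EOF here looks like the usual transient failure while sshd is still starting inside the freshly created container; the same hostname command succeeds a few seconds later. Should manual access ever be needed, a rough host-side equivalent of `minikube ssh` for this node (a sketch only, using the key and forwarded port shown above) would be:

    # SSH into the embed-certs-121127 node over the forwarded port
    ssh -o StrictHostKeyChecking=no -p 33068 \
      -i /home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa \
      docker@127.0.0.1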
	I1120 21:10:40.404163  213043 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (4.03136077s)
	I1120 21:10:40.404192  213043 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1120 21:10:40.404212  213043 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1120 21:10:40.404265  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1120 21:10:40.404354  213043 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.020070988s)
	I1120 21:10:40.404393  213043 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:10:41.950235  213043 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.54581724s)
	I1120 21:10:41.950249  213043 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.545959765s)
	I1120 21:10:41.950265  213043 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1120 21:10:41.950282  213043 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1120 21:10:41.950283  213043 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1120 21:10:41.950341  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1120 21:10:41.950375  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1120 21:10:43.404202  213043 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.453805888s)
	I1120 21:10:43.404216  213043 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.453855522s)
	I1120 21:10:43.404229  213043 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1120 21:10:43.404235  213043 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1120 21:10:43.404253  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1120 21:10:43.404262  213043 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1120 21:10:43.404305  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1120 21:10:44.415608  213043 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.011280814s)
	I1120 21:10:44.415633  213043 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1120 21:10:44.415653  213043 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1120 21:10:44.415699  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
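	(Editor's note: the "Loading image" steps above boil down to checking that a cached tarball exists under /var/lib/minikube/images and importing it into containerd's k8s.io namespace with `ctr images import`, run over SSH on the node. A minimal local sketch of that load step, assuming the tarball path from the log; the `loadImage` helper is hypothetical and not minikube's ssh_runner:)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage mirrors the "Loading image" step from the log: verify the
// cached tarball exists, then import it into the k8s.io namespace that
// the kubelet's CRI uses.
func loadImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("image tarball missing: %w", err)
	}
	cmd := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Path taken from the log; on a real node this command runs over SSH.
	if err := loadImage("/var/lib/minikube/images/kube-proxy_v1.34.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```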
	I1120 21:10:43.906323  215319 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-121127
	
	I1120 21:10:43.906398  215319 ubuntu.go:182] provisioning hostname "embed-certs-121127"
	I1120 21:10:43.906523  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:10:43.926276  215319 main.go:143] libmachine: Using SSH client type: native
	I1120 21:10:43.926627  215319 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1120 21:10:43.926645  215319 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-121127 && echo "embed-certs-121127" | sudo tee /etc/hostname
	I1120 21:10:44.095040  215319 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-121127
	
	I1120 21:10:44.095145  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:10:44.122076  215319 main.go:143] libmachine: Using SSH client type: native
	I1120 21:10:44.122377  215319 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1120 21:10:44.122394  215319 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-121127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-121127/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-121127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:10:44.272543  215319 main.go:143] libmachine: SSH cmd err, output: <nil>: 
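	(Editor's note: the shell block above only touches /etc/hosts when the hostname is not yet mapped, preferring to rewrite an existing 127.0.1.1 line over appending a new one. A sketch of the same idempotent update in Go, assuming root access and the hostname from the log; it reproduces the spirit of the grep/sed logic rather than the exact regexes:)

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname adds or rewrites the 127.0.1.1 entry so the given
// hostname resolves locally, leaving the file alone if it already does.
func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		fields := strings.Fields(l)
		if len(fields) < 2 {
			continue
		}
		for _, name := range fields[1:] {
			if name == hostname {
				return nil // already mapped, nothing to do
			}
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "embed-certs-121127"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```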
	I1120 21:10:44.272616  215319 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-2300/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-2300/.minikube}
	I1120 21:10:44.272648  215319 ubuntu.go:190] setting up certificates
	I1120 21:10:44.272694  215319 provision.go:84] configureAuth start
	I1120 21:10:44.272795  215319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-121127
	I1120 21:10:44.295429  215319 provision.go:143] copyHostCerts
	I1120 21:10:44.295506  215319 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem, removing ...
	I1120 21:10:44.295515  215319 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem
	I1120 21:10:44.295590  215319 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/ca.pem (1078 bytes)
	I1120 21:10:44.295690  215319 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem, removing ...
	I1120 21:10:44.295696  215319 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem
	I1120 21:10:44.295721  215319 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/cert.pem (1123 bytes)
	I1120 21:10:44.295770  215319 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem, removing ...
	I1120 21:10:44.295774  215319 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem
	I1120 21:10:44.295797  215319 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-2300/.minikube/key.pem (1675 bytes)
	I1120 21:10:44.295842  215319 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem org=jenkins.embed-certs-121127 san=[127.0.0.1 192.168.85.2 embed-certs-121127 localhost minikube]
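	(Editor's note: the server cert generated above carries both IP and DNS SANs so the Docker machine endpoint is valid under any of the names in the log. A small Go sketch of issuing a certificate with those SANs using the standard crypto/x509 package; it self-signs for brevity, whereas minikube signs with the CA key listed above:)

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SAN list copied from the provision.go line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-121127"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"embed-certs-121127", "localhost", "minikube"},
	}
	// Self-signed here; minikube uses ca.pem/ca-key.pem as the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```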
	I1120 21:10:44.963289  215319 provision.go:177] copyRemoteCerts
	I1120 21:10:44.963368  215319 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:10:44.963416  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:10:44.981983  215319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa Username:docker}
	I1120 21:10:45.114970  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1120 21:10:45.164130  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:10:45.266347  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1120 21:10:45.333899  215319 provision.go:87] duration metric: took 1.061173746s to configureAuth
	I1120 21:10:45.333993  215319 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:10:45.334254  215319 config.go:182] Loaded profile config "embed-certs-121127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 21:10:45.334310  215319 machine.go:97] duration metric: took 4.606143409s to provisionDockerMachine
	I1120 21:10:45.334332  215319 client.go:176] duration metric: took 12.760939004s to LocalClient.Create
	I1120 21:10:45.334394  215319 start.go:167] duration metric: took 12.761046313s to libmachine.API.Create "embed-certs-121127"
	I1120 21:10:45.334423  215319 start.go:293] postStartSetup for "embed-certs-121127" (driver="docker")
	I1120 21:10:45.334466  215319 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:10:45.334557  215319 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:10:45.334632  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:10:45.362644  215319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa Username:docker}
	I1120 21:10:45.472580  215319 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:10:45.476699  215319 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:10:45.476780  215319 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:10:45.476813  215319 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-2300/.minikube/addons for local assets ...
	I1120 21:10:45.476891  215319 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-2300/.minikube/files for local assets ...
	I1120 21:10:45.477009  215319 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem -> 40892.pem in /etc/ssl/certs
	I1120 21:10:45.477174  215319 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:10:45.485880  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem --> /etc/ssl/certs/40892.pem (1708 bytes)
	I1120 21:10:45.521795  215319 start.go:296] duration metric: took 187.325152ms for postStartSetup
	I1120 21:10:45.522236  215319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-121127
	I1120 21:10:45.541305  215319 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/config.json ...
	I1120 21:10:45.541583  215319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:10:45.541627  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:10:45.560207  215319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa Username:docker}
	I1120 21:10:45.666038  215319 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:10:45.673583  215319 start.go:128] duration metric: took 13.104071835s to createHost
	I1120 21:10:45.673657  215319 start.go:83] releasing machines lock for "embed-certs-121127", held for 13.104264806s
	I1120 21:10:45.673772  215319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-121127
	I1120 21:10:45.691364  215319 ssh_runner.go:195] Run: cat /version.json
	I1120 21:10:45.691417  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:10:45.691707  215319 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:10:45.691775  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:10:45.721473  215319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa Username:docker}
	I1120 21:10:45.721526  215319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa Username:docker}
	I1120 21:10:45.926136  215319 ssh_runner.go:195] Run: systemctl --version
	I1120 21:10:45.933647  215319 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:10:45.938244  215319 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:10:45.938311  215319 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:10:45.977247  215319 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1120 21:10:45.977275  215319 start.go:496] detecting cgroup driver to use...
	I1120 21:10:45.977309  215319 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1120 21:10:45.977358  215319 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1120 21:10:45.994321  215319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1120 21:10:46.012099  215319 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:10:46.012157  215319 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:10:46.041743  215319 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:10:46.071929  215319 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:10:46.211976  215319 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:10:46.376132  215319 docker.go:234] disabling docker service ...
	I1120 21:10:46.376211  215319 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:10:46.408443  215319 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:10:46.425330  215319 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:10:46.587808  215319 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:10:46.744357  215319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:10:46.763278  215319 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:10:46.782798  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1120 21:10:46.793468  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1120 21:10:46.805559  215319 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1120 21:10:46.805672  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1120 21:10:46.817132  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 21:10:46.828734  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1120 21:10:46.839608  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1120 21:10:46.850596  215319 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:10:46.862099  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1120 21:10:46.875041  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1120 21:10:46.885476  215319 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1120 21:10:46.899612  215319 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:10:46.908798  215319 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:10:46.917346  215319 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:10:47.072258  215319 ssh_runner.go:195] Run: sudo systemctl restart containerd
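	(Editor's note: the sed pipeline above keeps containerd's runc shim on the "cgroupfs" driver that detect.go reported for the host, by forcing SystemdCgroup = false in config.toml before the restart. A Go sketch of that one rewrite, assuming the config path from the log and root access; minikube itself does this via sed over SSH:)

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The log then runs `systemctl daemon-reload` and restarts containerd.
}
```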
	I1120 21:10:47.275782  215319 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1120 21:10:47.275901  215319 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1120 21:10:47.281032  215319 start.go:564] Will wait 60s for crictl version
	I1120 21:10:47.281114  215319 ssh_runner.go:195] Run: which crictl
	I1120 21:10:47.285086  215319 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:10:47.329951  215319 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1120 21:10:47.330037  215319 ssh_runner.go:195] Run: containerd --version
	I1120 21:10:47.361602  215319 ssh_runner.go:195] Run: containerd --version
	I1120 21:10:47.389507  215319 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1120 21:10:45.870691  213043 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.45496921s)
	I1120 21:10:45.870720  213043 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1120 21:10:45.870741  213043 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1120 21:10:45.870787  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1120 21:10:47.392809  215319 cli_runner.go:164] Run: docker network inspect embed-certs-121127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:10:47.412640  215319 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 21:10:47.419211  215319 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:10:47.432953  215319 kubeadm.go:884] updating cluster {Name:embed-certs-121127 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-121127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:10:47.433071  215319 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 21:10:47.433139  215319 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:10:47.460258  215319 containerd.go:627] all images are preloaded for containerd runtime.
	I1120 21:10:47.460278  215319 containerd.go:534] Images already preloaded, skipping extraction
	I1120 21:10:47.460337  215319 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:10:47.489781  215319 containerd.go:627] all images are preloaded for containerd runtime.
	I1120 21:10:47.489865  215319 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:10:47.489887  215319 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1120 21:10:47.490042  215319 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-121127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-121127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
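	(Editor's note: the kubelet [Unit]/[Service] snippet printed above is shipped as a systemd drop-in; a few lines later the log scps it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Clearing ExecStart and re-declaring it lets the drop-in override the packaged unit without editing it. A hedged Go sketch that writes such a drop-in, with the content copied from the log:)

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Drop-in body copied from the kubeadm.go:947 output above.
	dropIn := `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-121127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2

[Install]
`
	dir := "/etc/systemd/system/kubelet.service.d"
	if err := os.MkdirAll(dir, 0755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.WriteFile(filepath.Join(dir, "10-kubeadm.conf"), []byte(dropIn), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// As in the log, this is followed by `systemctl daemon-reload`
	// and `systemctl start kubelet`.
}
```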
	I1120 21:10:47.490138  215319 ssh_runner.go:195] Run: sudo crictl info
	I1120 21:10:47.517409  215319 cni.go:84] Creating CNI manager for ""
	I1120 21:10:47.517467  215319 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:10:47.517481  215319 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:10:47.517503  215319 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-121127 NodeName:embed-certs-121127 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:10:47.517622  215319 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-121127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:10:47.517689  215319 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:10:47.528700  215319 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:10:47.528768  215319 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:10:47.539896  215319 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1120 21:10:47.556476  215319 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:10:47.573801  215319 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1120 21:10:47.590490  215319 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:10:47.594962  215319 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:10:47.606556  215319 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:10:47.739301  215319 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:10:47.761249  215319 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127 for IP: 192.168.85.2
	I1120 21:10:47.761272  215319 certs.go:195] generating shared ca certs ...
	I1120 21:10:47.761288  215319 certs.go:227] acquiring lock for ca certs: {Name:mke329f4cdcc6bfc142b6fc6817600b3d33b3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:47.761463  215319 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-2300/.minikube/ca.key
	I1120 21:10:47.761507  215319 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.key
	I1120 21:10:47.761519  215319 certs.go:257] generating profile certs ...
	I1120 21:10:47.761589  215319 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/client.key
	I1120 21:10:47.761613  215319 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/client.crt with IP's: []
	I1120 21:10:48.111681  215319 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/client.crt ...
	I1120 21:10:48.111757  215319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/client.crt: {Name:mk41e49e5955215c92b66f29e111e723c695d93e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:48.111993  215319 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/client.key ...
	I1120 21:10:48.112028  215319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/client.key: {Name:mkcdb564eebad1869884c43fbb1e957ef4199a1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:48.112159  215319 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.key.3d6a70a3
	I1120 21:10:48.112199  215319 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.crt.3d6a70a3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1120 21:10:48.429809  215319 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.crt.3d6a70a3 ...
	I1120 21:10:48.429878  215319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.crt.3d6a70a3: {Name:mkcfae9cc43f66e3cf9a5997127280ec140cdb2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:48.430096  215319 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.key.3d6a70a3 ...
	I1120 21:10:48.430130  215319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.key.3d6a70a3: {Name:mke19264fba2e1ecf4c132bc0912f71b112c201b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:48.430266  215319 certs.go:382] copying /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.crt.3d6a70a3 -> /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.crt
	I1120 21:10:48.430401  215319 certs.go:386] copying /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.key.3d6a70a3 -> /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.key
	I1120 21:10:48.430529  215319 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/proxy-client.key
	I1120 21:10:48.430568  215319 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/proxy-client.crt with IP's: []
	I1120 21:10:48.940671  215319 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/proxy-client.crt ...
	I1120 21:10:48.940739  215319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/proxy-client.crt: {Name:mk71966f91c454d889688da2933343c6c48dec89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:48.940932  215319 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/proxy-client.key ...
	I1120 21:10:48.940964  215319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/proxy-client.key: {Name:mke99021ae8c3cb7e2eb27ac89c7511ee24bece4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:48.941210  215319 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089.pem (1338 bytes)
	W1120 21:10:48.941273  215319 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089_empty.pem, impossibly tiny 0 bytes
	I1120 21:10:48.941297  215319 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:10:48.941351  215319 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:10:48.941399  215319 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:10:48.941438  215319 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem (1675 bytes)
	I1120 21:10:48.941513  215319 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem (1708 bytes)
	I1120 21:10:48.942105  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:10:48.959701  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:10:48.977305  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:10:48.995854  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:10:49.013559  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1120 21:10:49.033271  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:10:49.051611  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:10:49.069777  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/embed-certs-121127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 21:10:49.087934  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem --> /usr/share/ca-certificates/40892.pem (1708 bytes)
	I1120 21:10:49.106276  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:10:49.124422  215319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089.pem --> /usr/share/ca-certificates/4089.pem (1338 bytes)
	I1120 21:10:49.143075  215319 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:10:49.156866  215319 ssh_runner.go:195] Run: openssl version
	I1120 21:10:49.164193  215319 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/40892.pem
	I1120 21:10:49.173583  215319 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/40892.pem /etc/ssl/certs/40892.pem
	I1120 21:10:49.181808  215319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40892.pem
	I1120 21:10:49.186785  215319 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:28 /usr/share/ca-certificates/40892.pem
	I1120 21:10:49.186866  215319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40892.pem
	I1120 21:10:49.229204  215319 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:10:49.237249  215319 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/40892.pem /etc/ssl/certs/3ec20f2e.0
	I1120 21:10:49.245475  215319 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:10:49.253641  215319 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:10:49.261689  215319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:10:49.266210  215319 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:10:49.266280  215319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:10:49.310753  215319 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:10:49.318839  215319 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 21:10:49.326973  215319 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4089.pem
	I1120 21:10:49.335207  215319 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4089.pem /etc/ssl/certs/4089.pem
	I1120 21:10:49.343340  215319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4089.pem
	I1120 21:10:49.347817  215319 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:28 /usr/share/ca-certificates/4089.pem
	I1120 21:10:49.347901  215319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4089.pem
	I1120 21:10:49.390888  215319 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:10:49.398902  215319 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4089.pem /etc/ssl/certs/51391683.0
	I1120 21:10:49.406620  215319 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:10:49.411818  215319 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:10:49.411898  215319 kubeadm.go:401] StartCluster: {Name:embed-certs-121127 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-121127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:10:49.411983  215319 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1120 21:10:49.412061  215319 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:10:49.448788  215319 cri.go:89] found id: ""
	I1120 21:10:49.448889  215319 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:10:49.459289  215319 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:10:49.467442  215319 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 21:10:49.467529  215319 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:10:49.478932  215319 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:10:49.478966  215319 kubeadm.go:158] found existing configuration files:
	
	I1120 21:10:49.479017  215319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 21:10:49.487902  215319 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:10:49.487981  215319 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:10:49.495649  215319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 21:10:49.506806  215319 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:10:49.506873  215319 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:10:49.521609  215319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 21:10:49.534189  215319 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:10:49.534265  215319 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:10:49.549708  215319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 21:10:49.574323  215319 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:10:49.574407  215319 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 21:10:49.598504  215319 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 21:10:49.657069  215319 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 21:10:49.657607  215319 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 21:10:49.690719  215319 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 21:10:49.690818  215319 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1120 21:10:49.690860  215319 kubeadm.go:319] OS: Linux
	I1120 21:10:49.690921  215319 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 21:10:49.690987  215319 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1120 21:10:49.691053  215319 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 21:10:49.691116  215319 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 21:10:49.691181  215319 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 21:10:49.691246  215319 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 21:10:49.691307  215319 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 21:10:49.691381  215319 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 21:10:49.691442  215319 kubeadm.go:319] CGROUPS_BLKIO: enabled
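	(Editor's note: the CGROUPS_* lines above come from kubeadm's preflight system verification, which checks that each required controller is enabled. A sketch that reads the same information from /proc/cgroups on a cgroup-v1 host; the column layout is the kernel's documented "subsys_name hierarchy num_cgroups enabled" format:)

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/cgroups")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "#") {
			continue // header: #subsys_name hierarchy num_cgroups enabled
		}
		fields := strings.Fields(line)
		if len(fields) < 4 {
			continue
		}
		state := "disabled"
		if fields[3] == "1" {
			state = "enabled"
		}
		fmt.Printf("CGROUPS_%s: %s\n", strings.ToUpper(fields[0]), state)
	}
}
```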
	I1120 21:10:49.795770  215319 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 21:10:49.795897  215319 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 21:10:49.796010  215319 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 21:10:49.802148  215319 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 21:10:49.804142  215319 out.go:252]   - Generating certificates and keys ...
	I1120 21:10:49.804262  215319 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 21:10:49.804356  215319 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 21:10:50.453181  215319 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 21:10:51.347078  215319 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 21:10:52.173658  215319 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 21:10:49.762274  213043 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (3.8914586s)
	I1120 21:10:49.762303  213043 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1120 21:10:49.762324  213043 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1120 21:10:49.762373  213043 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1120 21:10:50.243577  213043 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1120 21:10:50.243620  213043 cache_images.go:125] Successfully loaded all cached images
	I1120 21:10:50.243627  213043 cache_images.go:94] duration metric: took 16.252082759s to LoadCachedImages
	I1120 21:10:50.243639  213043 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1120 21:10:50.243738  213043 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-882483 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-882483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:10:50.243810  213043 ssh_runner.go:195] Run: sudo crictl info
	I1120 21:10:50.287246  213043 cni.go:84] Creating CNI manager for ""
	I1120 21:10:50.287270  213043 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:10:50.287285  213043 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:10:50.287308  213043 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-882483 NodeName:no-preload-882483 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:10:50.287428  213043 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-882483"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:10:50.287508  213043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:10:50.295696  213043 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1120 21:10:50.295760  213043 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1120 21:10:50.304146  213043 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1120 21:10:50.304246  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1120 21:10:50.305597  213043 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21923-2300/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1120 21:10:50.306023  213043 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21923-2300/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1120 21:10:50.310024  213043 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1120 21:10:50.310058  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1120 21:10:51.235266  213043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:10:51.284371  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1120 21:10:51.295708  213043 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1120 21:10:51.295752  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1120 21:10:51.339481  213043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1120 21:10:51.361779  213043 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1120 21:10:51.361823  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
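	(Editor's note: because this profile runs without a preload, the download.go lines above fetch kubectl/kubelet/kubeadm from dl.k8s.io together with a companion .sha256 file. A hedged Go sketch of that download-and-verify pattern; the URL is taken from the log, and it assumes the checksum file contains just the bare hex digest, which is how the Kubernetes release binaries publish it:)

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into dest and returns the SHA-256 of the bytes written.
func fetch(url, dest string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	out, err := os.Create(dest)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	const url = "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl" // from the log
	got, err := fetch(url, "kubectl")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	resp, err := http.Get(url + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	if got != strings.TrimSpace(string(want)) {
		fmt.Fprintln(os.Stderr, "checksum mismatch")
		os.Exit(1)
	}
	fmt.Println("kubectl verified:", got)
}
```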
	I1120 21:10:52.042136  213043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:10:52.051916  213043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1120 21:10:52.070654  213043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:10:52.088633  213043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1120 21:10:52.103541  213043 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:10:52.108452  213043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:10:52.119300  213043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:10:52.275826  213043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:10:52.301903  213043 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483 for IP: 192.168.76.2
	I1120 21:10:52.301978  213043 certs.go:195] generating shared ca certs ...
	I1120 21:10:52.302013  213043 certs.go:227] acquiring lock for ca certs: {Name:mke329f4cdcc6bfc142b6fc6817600b3d33b3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:52.302219  213043 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-2300/.minikube/ca.key
	I1120 21:10:52.302301  213043 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.key
	I1120 21:10:52.302343  213043 certs.go:257] generating profile certs ...
	I1120 21:10:52.302455  213043 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.key
	I1120 21:10:52.302496  213043 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt with IP's: []
	I1120 21:10:52.475055  213043 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt ...
	I1120 21:10:52.475158  213043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt: {Name:mke16c272213fcda87d56ed6709d26dba4d62f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:52.475447  213043 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.key ...
	I1120 21:10:52.475489  213043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.key: {Name:mk11537008085ba18fd08498bb3cd3d67a88403c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:52.475675  213043 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.key.5ea2376b
	I1120 21:10:52.475739  213043 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.crt.5ea2376b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1120 21:10:52.953001  213043 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.crt.5ea2376b ...
	I1120 21:10:52.953086  213043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.crt.5ea2376b: {Name:mkccfcd6f9d0ee0e8ecb43b201b9616e06251f37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:52.953331  213043 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.key.5ea2376b ...
	I1120 21:10:52.953393  213043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.key.5ea2376b: {Name:mka71b572b1de86324bfa1c51fcf20ebd1fd56e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:52.953549  213043 certs.go:382] copying /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.crt.5ea2376b -> /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.crt
	I1120 21:10:52.953717  213043 certs.go:386] copying /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.key.5ea2376b -> /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.key
	I1120 21:10:52.953877  213043 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/proxy-client.key
	I1120 21:10:52.953934  213043 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/proxy-client.crt with IP's: []
	I1120 21:10:54.102166  213043 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/proxy-client.crt ...
	I1120 21:10:54.102275  213043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/proxy-client.crt: {Name:mk3ef8ad5b02a9ed720dd5219a0dec14ba23c27c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:54.102547  213043 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/proxy-client.key ...
	I1120 21:10:54.102613  213043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/proxy-client.key: {Name:mk9ecc41c22dd18731c34c27ecd7ca439520a1a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:10:54.102982  213043 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089.pem (1338 bytes)
	W1120 21:10:54.103084  213043 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089_empty.pem, impossibly tiny 0 bytes
	I1120 21:10:54.103127  213043 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:10:54.103203  213043 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem (1078 bytes)
	I1120 21:10:54.103275  213043 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:10:54.103342  213043 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/certs/key.pem (1675 bytes)
	I1120 21:10:54.103436  213043 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem (1708 bytes)
	I1120 21:10:54.104333  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:10:54.139019  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1120 21:10:54.158056  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:10:54.178364  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:10:54.199024  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 21:10:54.219004  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:10:54.241468  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:10:54.266858  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:10:54.287775  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:10:54.309214  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/certs/4089.pem --> /usr/share/ca-certificates/4089.pem (1338 bytes)
	I1120 21:10:54.330497  213043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/ssl/certs/40892.pem --> /usr/share/ca-certificates/40892.pem (1708 bytes)
	I1120 21:10:54.350701  213043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:10:54.376543  213043 ssh_runner.go:195] Run: openssl version
	I1120 21:10:54.386156  213043 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:10:54.395660  213043 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:10:54.405357  213043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:10:54.410953  213043 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:10:54.411109  213043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:10:54.463743  213043 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:10:54.474887  213043 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 21:10:54.485054  213043 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4089.pem
	I1120 21:10:54.495515  213043 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4089.pem /etc/ssl/certs/4089.pem
	I1120 21:10:54.505920  213043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4089.pem
	I1120 21:10:54.511335  213043 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:28 /usr/share/ca-certificates/4089.pem
	I1120 21:10:54.511475  213043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4089.pem
	I1120 21:10:54.558639  213043 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:10:54.569012  213043 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4089.pem /etc/ssl/certs/51391683.0
	I1120 21:10:54.579698  213043 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/40892.pem
	I1120 21:10:54.589945  213043 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/40892.pem /etc/ssl/certs/40892.pem
	I1120 21:10:54.600319  213043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40892.pem
	I1120 21:10:54.606020  213043 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:28 /usr/share/ca-certificates/40892.pem
	I1120 21:10:54.606167  213043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40892.pem
	I1120 21:10:54.652833  213043 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:10:54.663253  213043 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/40892.pem /etc/ssl/certs/3ec20f2e.0
	I1120 21:10:54.674879  213043 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:10:54.680865  213043 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:10:54.680998  213043 kubeadm.go:401] StartCluster: {Name:no-preload-882483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-882483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:10:54.681127  213043 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1120 21:10:54.681238  213043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:10:54.719310  213043 cri.go:89] found id: ""
	I1120 21:10:54.719430  213043 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:10:54.730788  213043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:10:54.740813  213043 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 21:10:54.740925  213043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:10:54.762317  213043 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:10:54.762487  213043 kubeadm.go:158] found existing configuration files:
	
	I1120 21:10:54.762568  213043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 21:10:54.779135  213043 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:10:54.779243  213043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:10:54.794299  213043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 21:10:54.814263  213043 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:10:54.814388  213043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:10:54.827373  213043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 21:10:54.837939  213043 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:10:54.838093  213043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:10:54.847869  213043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 21:10:54.858397  213043 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:10:54.858626  213043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 21:10:54.867935  213043 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 21:10:54.934595  213043 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 21:10:54.935041  213043 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 21:10:54.975377  213043 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 21:10:54.975574  213043 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1120 21:10:54.975663  213043 kubeadm.go:319] OS: Linux
	I1120 21:10:54.975732  213043 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 21:10:54.975807  213043 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1120 21:10:54.975870  213043 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 21:10:54.975929  213043 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 21:10:54.975984  213043 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 21:10:54.976039  213043 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 21:10:54.976090  213043 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 21:10:54.976148  213043 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 21:10:54.976202  213043 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1120 21:10:55.105807  213043 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 21:10:55.106019  213043 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 21:10:55.106174  213043 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 21:10:55.124866  213043 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 21:10:52.295500  215319 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 21:10:52.922854  215319 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 21:10:52.923006  215319 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-121127 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1120 21:10:53.618363  215319 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 21:10:53.618710  215319 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-121127 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1120 21:10:54.678656  215319 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 21:10:55.022908  215319 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 21:10:55.930976  215319 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 21:10:55.931059  215319 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 21:10:56.196775  215319 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 21:10:56.256859  215319 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 21:10:57.480653  215319 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 21:10:57.882040  215319 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 21:10:59.008366  215319 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 21:10:59.009609  215319 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 21:10:59.012690  215319 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 21:10:55.129842  213043 out.go:252]   - Generating certificates and keys ...
	I1120 21:10:55.129953  213043 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 21:10:55.130034  213043 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 21:10:55.703119  213043 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 21:10:57.495554  213043 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 21:10:58.987375  213043 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 21:10:59.018119  215319 out.go:252]   - Booting up control plane ...
	I1120 21:10:59.018239  215319 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 21:10:59.019161  215319 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 21:10:59.023930  215319 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 21:10:59.053554  215319 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 21:10:59.053869  215319 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 21:10:59.063964  215319 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 21:10:59.064275  215319 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 21:10:59.064323  215319 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 21:10:59.223517  215319 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 21:10:59.223653  215319 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 21:11:00.233657  215319 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.006765293s
	I1120 21:11:00.234056  215319 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 21:11:00.234173  215319 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1120 21:11:00.234269  215319 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 21:11:00.234352  215319 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 21:11:00.160193  213043 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 21:11:00.487451  213043 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 21:11:00.487606  213043 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-882483] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1120 21:11:01.362818  213043 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 21:11:01.362965  213043 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-882483] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1120 21:11:01.950793  213043 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 21:11:02.234780  213043 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 21:11:02.354738  213043 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 21:11:02.354814  213043 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 21:11:03.294772  213043 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 21:11:03.782757  213043 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 21:11:04.193232  213043 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 21:11:04.382767  213043 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 21:11:04.860708  213043 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 21:11:04.860820  213043 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 21:11:04.863738  213043 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 21:11:06.422787  215319 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.18440449s
	I1120 21:11:04.867234  213043 out.go:252]   - Booting up control plane ...
	I1120 21:11:04.867345  213043 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 21:11:04.867426  213043 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 21:11:04.869523  213043 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 21:11:04.899399  213043 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 21:11:04.899521  213043 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 21:11:04.910846  213043 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 21:11:04.910950  213043 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 21:11:04.910992  213043 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 21:11:05.165409  213043 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 21:11:05.165535  213043 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 21:11:07.666553  213043 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.50132257s
	I1120 21:11:07.673736  213043 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 21:11:07.673836  213043 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1120 21:11:07.673929  213043 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 21:11:07.674011  213043 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 21:11:10.140123  215319 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.906080727s
	I1120 21:11:10.236438  215319 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.001234957s
	I1120 21:11:10.277276  215319 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 21:11:10.297843  215319 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 21:11:10.325911  215319 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 21:11:10.326122  215319 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-121127 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 21:11:10.343351  215319 kubeadm.go:319] [bootstrap-token] Using token: g2mfyc.h1z2cs46qltqtwt7
	I1120 21:11:10.346280  215319 out.go:252]   - Configuring RBAC rules ...
	I1120 21:11:10.346395  215319 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 21:11:10.355545  215319 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 21:11:10.371412  215319 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 21:11:10.379803  215319 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 21:11:10.384535  215319 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 21:11:10.389891  215319 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 21:11:10.651084  215319 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 21:11:11.178743  215319 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 21:11:11.643258  215319 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 21:11:11.644866  215319 kubeadm.go:319] 
	I1120 21:11:11.644956  215319 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 21:11:11.644967  215319 kubeadm.go:319] 
	I1120 21:11:11.645048  215319 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 21:11:11.645057  215319 kubeadm.go:319] 
	I1120 21:11:11.645083  215319 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 21:11:11.645148  215319 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 21:11:11.645207  215319 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 21:11:11.645216  215319 kubeadm.go:319] 
	I1120 21:11:11.645273  215319 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 21:11:11.645282  215319 kubeadm.go:319] 
	I1120 21:11:11.645332  215319 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 21:11:11.645340  215319 kubeadm.go:319] 
	I1120 21:11:11.645395  215319 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 21:11:11.645477  215319 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 21:11:11.645552  215319 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 21:11:11.645561  215319 kubeadm.go:319] 
	I1120 21:11:11.645649  215319 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 21:11:11.645732  215319 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 21:11:11.645740  215319 kubeadm.go:319] 
	I1120 21:11:11.645828  215319 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token g2mfyc.h1z2cs46qltqtwt7 \
	I1120 21:11:11.645940  215319 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0ba1c2e2176682abc636887486248c0692df9e5785bd62c1c27f34717ff2c43f \
	I1120 21:11:11.645968  215319 kubeadm.go:319] 	--control-plane 
	I1120 21:11:11.645976  215319 kubeadm.go:319] 
	I1120 21:11:11.646064  215319 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 21:11:11.646072  215319 kubeadm.go:319] 
	I1120 21:11:11.646158  215319 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token g2mfyc.h1z2cs46qltqtwt7 \
	I1120 21:11:11.646268  215319 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0ba1c2e2176682abc636887486248c0692df9e5785bd62c1c27f34717ff2c43f 
	I1120 21:11:11.655114  215319 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1120 21:11:11.655359  215319 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1120 21:11:11.655475  215319 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 21:11:11.655496  215319 cni.go:84] Creating CNI manager for ""
	I1120 21:11:11.655507  215319 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:11:11.659138  215319 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 21:11:11.662081  215319 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 21:11:11.673041  215319 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 21:11:11.673064  215319 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 21:11:11.710439  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 21:11:12.218534  215319 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 21:11:12.218778  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:12.218980  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-121127 minikube.k8s.io/updated_at=2025_11_20T21_11_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=embed-certs-121127 minikube.k8s.io/primary=true
	I1120 21:11:12.731728  215319 ops.go:34] apiserver oom_adj: -16
	I1120 21:11:12.731867  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:13.232731  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:13.731892  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:14.232449  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:14.732623  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:15.232406  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:15.732248  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:16.232649  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:16.732656  215319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:16.983771  215319 kubeadm.go:1114] duration metric: took 4.765063909s to wait for elevateKubeSystemPrivileges
	I1120 21:11:16.983860  215319 kubeadm.go:403] duration metric: took 27.571965786s to StartCluster
	I1120 21:11:16.983894  215319 settings.go:142] acquiring lock: {Name:mk8f1e96fadc1ef170d5eddc49033a884865c024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:16.983996  215319 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 21:11:16.985044  215319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/kubeconfig: {Name:mk7ea52a23a4d9fc2da4c68a59479b947db5281c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:16.985390  215319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:11:16.985651  215319 config.go:182] Loaded profile config "embed-certs-121127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 21:11:16.985828  215319 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:11:16.985907  215319 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-121127"
	I1120 21:11:16.985927  215319 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-121127"
	I1120 21:11:16.985951  215319 host.go:66] Checking if "embed-certs-121127" exists ...
	I1120 21:11:16.986480  215319 cli_runner.go:164] Run: docker container inspect embed-certs-121127 --format={{.State.Status}}
	I1120 21:11:16.986648  215319 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 21:11:16.987026  215319 addons.go:70] Setting default-storageclass=true in profile "embed-certs-121127"
	I1120 21:11:16.987047  215319 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-121127"
	I1120 21:11:16.987307  215319 cli_runner.go:164] Run: docker container inspect embed-certs-121127 --format={{.State.Status}}
	I1120 21:11:16.994548  215319 out.go:179] * Verifying Kubernetes components...
	I1120 21:11:16.997459  215319 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:11:17.032412  215319 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:11:17.035320  215319 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:11:17.035342  215319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:11:17.035408  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:11:17.043766  215319 addons.go:239] Setting addon default-storageclass=true in "embed-certs-121127"
	I1120 21:11:17.043811  215319 host.go:66] Checking if "embed-certs-121127" exists ...
	I1120 21:11:17.044255  215319 cli_runner.go:164] Run: docker container inspect embed-certs-121127 --format={{.State.Status}}
	I1120 21:11:17.084660  215319 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:11:17.084683  215319 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:11:17.084813  215319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-121127
	I1120 21:11:17.092328  215319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa Username:docker}
	I1120 21:11:17.116177  215319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/embed-certs-121127/id_rsa Username:docker}
	I1120 21:11:17.534085  215319 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:11:17.534275  215319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 21:11:17.612534  215319 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:11:17.775943  215319 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:11:18.900813  215319 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.366494688s)
	I1120 21:11:18.900857  215319 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1120 21:11:18.901467  215319 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.367345174s)
	I1120 21:11:18.903758  215319 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.29117473s)
	I1120 21:11:18.904554  215319 node_ready.go:35] waiting up to 6m0s for node "embed-certs-121127" to be "Ready" ...
	I1120 21:11:19.381801  215319 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.605810992s)
	I1120 21:11:19.384863  215319 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1120 21:11:17.344135  213043 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 9.673723727s
	I1120 21:11:18.482804  213043 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.812762535s
	I1120 21:11:20.179642  213043 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 12.508029475s
	I1120 21:11:20.218353  213043 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 21:11:20.236040  213043 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 21:11:20.253784  213043 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 21:11:20.254085  213043 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-882483 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 21:11:20.265837  213043 kubeadm.go:319] [bootstrap-token] Using token: ywj62v.23n6crze3giefwpo
	I1120 21:11:20.268865  213043 out.go:252]   - Configuring RBAC rules ...
	I1120 21:11:20.269002  213043 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 21:11:20.275805  213043 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 21:11:20.284587  213043 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 21:11:20.288932  213043 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 21:11:20.293256  213043 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 21:11:20.299239  213043 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 21:11:20.592688  213043 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 21:11:21.061072  213043 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 21:11:21.593533  213043 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 21:11:21.594862  213043 kubeadm.go:319] 
	I1120 21:11:21.594940  213043 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 21:11:21.594950  213043 kubeadm.go:319] 
	I1120 21:11:21.595034  213043 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 21:11:21.595041  213043 kubeadm.go:319] 
	I1120 21:11:21.595068  213043 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 21:11:21.595133  213043 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 21:11:21.595189  213043 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 21:11:21.595196  213043 kubeadm.go:319] 
	I1120 21:11:21.595253  213043 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 21:11:21.595261  213043 kubeadm.go:319] 
	I1120 21:11:21.595311  213043 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 21:11:21.595319  213043 kubeadm.go:319] 
	I1120 21:11:21.595373  213043 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 21:11:21.595455  213043 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 21:11:21.595530  213043 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 21:11:21.595539  213043 kubeadm.go:319] 
	I1120 21:11:21.595628  213043 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 21:11:21.595712  213043 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 21:11:21.595720  213043 kubeadm.go:319] 
	I1120 21:11:21.595815  213043 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ywj62v.23n6crze3giefwpo \
	I1120 21:11:21.595926  213043 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0ba1c2e2176682abc636887486248c0692df9e5785bd62c1c27f34717ff2c43f \
	I1120 21:11:21.595951  213043 kubeadm.go:319] 	--control-plane 
	I1120 21:11:21.595959  213043 kubeadm.go:319] 
	I1120 21:11:21.596048  213043 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 21:11:21.596056  213043 kubeadm.go:319] 
	I1120 21:11:21.596141  213043 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ywj62v.23n6crze3giefwpo \
	I1120 21:11:21.596251  213043 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0ba1c2e2176682abc636887486248c0692df9e5785bd62c1c27f34717ff2c43f 
	I1120 21:11:21.599732  213043 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1120 21:11:21.599979  213043 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1120 21:11:21.600091  213043 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 21:11:21.600143  213043 cni.go:84] Creating CNI manager for ""
	I1120 21:11:21.600153  213043 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:11:21.603419  213043 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 21:11:19.387776  215319 addons.go:515] duration metric: took 2.401930065s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1120 21:11:19.406917  215319 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-121127" context rescaled to 1 replicas
	W1120 21:11:20.907507  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	I1120 21:11:21.606304  213043 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 21:11:21.614557  213043 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 21:11:21.614580  213043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 21:11:21.634259  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 21:11:21.982875  213043 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 21:11:21.983009  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:21.983088  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-882483 minikube.k8s.io/updated_at=2025_11_20T21_11_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=no-preload-882483 minikube.k8s.io/primary=true
	I1120 21:11:22.007465  213043 ops.go:34] apiserver oom_adj: -16
	I1120 21:11:22.157159  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:22.657268  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:23.157803  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:23.657236  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:24.157477  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:24.657860  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:25.157411  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:25.657702  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:26.157843  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:11:26.285482  213043 kubeadm.go:1114] duration metric: took 4.302523321s to wait for elevateKubeSystemPrivileges
	I1120 21:11:26.285513  213043 kubeadm.go:403] duration metric: took 31.604531804s to StartCluster
	I1120 21:11:26.285531  213043 settings.go:142] acquiring lock: {Name:mk8f1e96fadc1ef170d5eddc49033a884865c024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:26.285593  213043 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 21:11:26.287164  213043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/kubeconfig: {Name:mk7ea52a23a4d9fc2da4c68a59479b947db5281c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:11:26.287414  213043 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 21:11:26.287556  213043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:11:26.287944  213043 config.go:182] Loaded profile config "no-preload-882483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 21:11:26.287991  213043 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:11:26.288058  213043 addons.go:70] Setting storage-provisioner=true in profile "no-preload-882483"
	I1120 21:11:26.288076  213043 addons.go:239] Setting addon storage-provisioner=true in "no-preload-882483"
	I1120 21:11:26.288097  213043 host.go:66] Checking if "no-preload-882483" exists ...
	I1120 21:11:26.288521  213043 addons.go:70] Setting default-storageclass=true in profile "no-preload-882483"
	I1120 21:11:26.288541  213043 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-882483"
	I1120 21:11:26.288783  213043 cli_runner.go:164] Run: docker container inspect no-preload-882483 --format={{.State.Status}}
	I1120 21:11:26.289073  213043 cli_runner.go:164] Run: docker container inspect no-preload-882483 --format={{.State.Status}}
	I1120 21:11:26.290586  213043 out.go:179] * Verifying Kubernetes components...
	I1120 21:11:26.293477  213043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:11:26.323920  213043 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1120 21:11:23.414189  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:25.414872  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	I1120 21:11:26.327978  213043 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:11:26.328003  213043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:11:26.328079  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:11:26.336255  213043 addons.go:239] Setting addon default-storageclass=true in "no-preload-882483"
	I1120 21:11:26.336296  213043 host.go:66] Checking if "no-preload-882483" exists ...
	I1120 21:11:26.336717  213043 cli_runner.go:164] Run: docker container inspect no-preload-882483 --format={{.State.Status}}
	I1120 21:11:26.360671  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:11:26.376397  213043 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:11:26.376418  213043 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:11:26.376479  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-882483
	I1120 21:11:26.400988  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/no-preload-882483/id_rsa Username:docker}
	I1120 21:11:26.687850  213043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:11:26.690324  213043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 21:11:26.690505  213043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:11:26.734149  213043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:11:27.397240  213043 node_ready.go:35] waiting up to 6m0s for node "no-preload-882483" to be "Ready" ...
	I1120 21:11:27.397539  213043 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1120 21:11:27.679069  213043 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1120 21:11:27.682766  213043 addons.go:515] duration metric: took 1.394759028s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1120 21:11:27.905580  213043 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-882483" context rescaled to 1 replicas
	W1120 21:11:29.421800  213043 node_ready.go:57] node "no-preload-882483" has "Ready":"False" status (will retry)
	W1120 21:11:27.908189  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:30.412440  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:31.900213  213043 node_ready.go:57] node "no-preload-882483" has "Ready":"False" status (will retry)
	W1120 21:11:33.901668  213043 node_ready.go:57] node "no-preload-882483" has "Ready":"False" status (will retry)
	W1120 21:11:32.907272  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:34.908040  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:36.400656  213043 node_ready.go:57] node "no-preload-882483" has "Ready":"False" status (will retry)
	W1120 21:11:38.900383  213043 node_ready.go:57] node "no-preload-882483" has "Ready":"False" status (will retry)
	W1120 21:11:37.413240  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:39.413709  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:41.907357  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	I1120 21:11:39.906331  213043 node_ready.go:49] node "no-preload-882483" is "Ready"
	I1120 21:11:39.906365  213043 node_ready.go:38] duration metric: took 12.509091525s for node "no-preload-882483" to be "Ready" ...
	I1120 21:11:39.906384  213043 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:11:39.906485  213043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:11:39.929872  213043 api_server.go:72] duration metric: took 13.642420091s to wait for apiserver process to appear ...
	I1120 21:11:39.929897  213043 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:11:39.929916  213043 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1120 21:11:39.959040  213043 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1120 21:11:39.960371  213043 api_server.go:141] control plane version: v1.34.1
	I1120 21:11:39.960395  213043 api_server.go:131] duration metric: took 30.49151ms to wait for apiserver health ...
	I1120 21:11:39.960406  213043 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:11:39.968917  213043 system_pods.go:59] 8 kube-system pods found
	I1120 21:11:39.968949  213043 system_pods.go:61] "coredns-66bc5c9577-kbl4d" [7e90701b-e158-4e32-b311-ef635af8eec0] Pending
	I1120 21:11:39.968956  213043 system_pods.go:61] "etcd-no-preload-882483" [2cc20a64-e788-4bd3-99ae-22f5e88054c6] Running
	I1120 21:11:39.968960  213043 system_pods.go:61] "kindnet-jr57n" [43754294-4619-410a-9cf0-01baa9df142e] Running
	I1120 21:11:39.968964  213043 system_pods.go:61] "kube-apiserver-no-preload-882483" [e355b3e2-5d70-4e01-b443-2e5c756584db] Running
	I1120 21:11:39.968969  213043 system_pods.go:61] "kube-controller-manager-no-preload-882483" [b7f3c12d-8398-4a69-8399-1c21d54c7624] Running
	I1120 21:11:39.968974  213043 system_pods.go:61] "kube-proxy-n9cg7" [77a3defc-bd58-414c-9c2a-bf750429a720] Running
	I1120 21:11:39.968980  213043 system_pods.go:61] "kube-scheduler-no-preload-882483" [40685019-a9a4-4a4a-9e42-58d2a9f39f9b] Running
	I1120 21:11:39.968993  213043 system_pods.go:61] "storage-provisioner" [1698ab50-608c-439f-b6de-81323e57d2c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:11:39.969000  213043 system_pods.go:74] duration metric: took 8.588323ms to wait for pod list to return data ...
	I1120 21:11:39.969014  213043 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:11:39.974128  213043 default_sa.go:45] found service account: "default"
	I1120 21:11:39.974153  213043 default_sa.go:55] duration metric: took 5.133696ms for default service account to be created ...
	I1120 21:11:39.974162  213043 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:11:39.980371  213043 system_pods.go:86] 8 kube-system pods found
	I1120 21:11:39.980456  213043 system_pods.go:89] "coredns-66bc5c9577-kbl4d" [7e90701b-e158-4e32-b311-ef635af8eec0] Pending
	I1120 21:11:39.980476  213043 system_pods.go:89] "etcd-no-preload-882483" [2cc20a64-e788-4bd3-99ae-22f5e88054c6] Running
	I1120 21:11:39.980495  213043 system_pods.go:89] "kindnet-jr57n" [43754294-4619-410a-9cf0-01baa9df142e] Running
	I1120 21:11:39.980534  213043 system_pods.go:89] "kube-apiserver-no-preload-882483" [e355b3e2-5d70-4e01-b443-2e5c756584db] Running
	I1120 21:11:39.980557  213043 system_pods.go:89] "kube-controller-manager-no-preload-882483" [b7f3c12d-8398-4a69-8399-1c21d54c7624] Running
	I1120 21:11:39.980576  213043 system_pods.go:89] "kube-proxy-n9cg7" [77a3defc-bd58-414c-9c2a-bf750429a720] Running
	I1120 21:11:39.980616  213043 system_pods.go:89] "kube-scheduler-no-preload-882483" [40685019-a9a4-4a4a-9e42-58d2a9f39f9b] Running
	I1120 21:11:39.980644  213043 system_pods.go:89] "storage-provisioner" [1698ab50-608c-439f-b6de-81323e57d2c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:11:39.980689  213043 retry.go:31] will retry after 211.631364ms: missing components: kube-dns
	I1120 21:11:40.197643  213043 system_pods.go:86] 8 kube-system pods found
	I1120 21:11:40.197678  213043 system_pods.go:89] "coredns-66bc5c9577-kbl4d" [7e90701b-e158-4e32-b311-ef635af8eec0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:11:40.197685  213043 system_pods.go:89] "etcd-no-preload-882483" [2cc20a64-e788-4bd3-99ae-22f5e88054c6] Running
	I1120 21:11:40.197692  213043 system_pods.go:89] "kindnet-jr57n" [43754294-4619-410a-9cf0-01baa9df142e] Running
	I1120 21:11:40.197697  213043 system_pods.go:89] "kube-apiserver-no-preload-882483" [e355b3e2-5d70-4e01-b443-2e5c756584db] Running
	I1120 21:11:40.197715  213043 system_pods.go:89] "kube-controller-manager-no-preload-882483" [b7f3c12d-8398-4a69-8399-1c21d54c7624] Running
	I1120 21:11:40.197721  213043 system_pods.go:89] "kube-proxy-n9cg7" [77a3defc-bd58-414c-9c2a-bf750429a720] Running
	I1120 21:11:40.197725  213043 system_pods.go:89] "kube-scheduler-no-preload-882483" [40685019-a9a4-4a4a-9e42-58d2a9f39f9b] Running
	I1120 21:11:40.197732  213043 system_pods.go:89] "storage-provisioner" [1698ab50-608c-439f-b6de-81323e57d2c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:11:40.197751  213043 retry.go:31] will retry after 377.800802ms: missing components: kube-dns
	I1120 21:11:40.593257  213043 system_pods.go:86] 8 kube-system pods found
	I1120 21:11:40.593306  213043 system_pods.go:89] "coredns-66bc5c9577-kbl4d" [7e90701b-e158-4e32-b311-ef635af8eec0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:11:40.593313  213043 system_pods.go:89] "etcd-no-preload-882483" [2cc20a64-e788-4bd3-99ae-22f5e88054c6] Running
	I1120 21:11:40.593319  213043 system_pods.go:89] "kindnet-jr57n" [43754294-4619-410a-9cf0-01baa9df142e] Running
	I1120 21:11:40.593324  213043 system_pods.go:89] "kube-apiserver-no-preload-882483" [e355b3e2-5d70-4e01-b443-2e5c756584db] Running
	I1120 21:11:40.593329  213043 system_pods.go:89] "kube-controller-manager-no-preload-882483" [b7f3c12d-8398-4a69-8399-1c21d54c7624] Running
	I1120 21:11:40.593333  213043 system_pods.go:89] "kube-proxy-n9cg7" [77a3defc-bd58-414c-9c2a-bf750429a720] Running
	I1120 21:11:40.593338  213043 system_pods.go:89] "kube-scheduler-no-preload-882483" [40685019-a9a4-4a4a-9e42-58d2a9f39f9b] Running
	I1120 21:11:40.593344  213043 system_pods.go:89] "storage-provisioner" [1698ab50-608c-439f-b6de-81323e57d2c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:11:40.593365  213043 retry.go:31] will retry after 415.468389ms: missing components: kube-dns
	I1120 21:11:41.014146  213043 system_pods.go:86] 8 kube-system pods found
	I1120 21:11:41.014181  213043 system_pods.go:89] "coredns-66bc5c9577-kbl4d" [7e90701b-e158-4e32-b311-ef635af8eec0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:11:41.014188  213043 system_pods.go:89] "etcd-no-preload-882483" [2cc20a64-e788-4bd3-99ae-22f5e88054c6] Running
	I1120 21:11:41.014194  213043 system_pods.go:89] "kindnet-jr57n" [43754294-4619-410a-9cf0-01baa9df142e] Running
	I1120 21:11:41.014204  213043 system_pods.go:89] "kube-apiserver-no-preload-882483" [e355b3e2-5d70-4e01-b443-2e5c756584db] Running
	I1120 21:11:41.014210  213043 system_pods.go:89] "kube-controller-manager-no-preload-882483" [b7f3c12d-8398-4a69-8399-1c21d54c7624] Running
	I1120 21:11:41.014214  213043 system_pods.go:89] "kube-proxy-n9cg7" [77a3defc-bd58-414c-9c2a-bf750429a720] Running
	I1120 21:11:41.014219  213043 system_pods.go:89] "kube-scheduler-no-preload-882483" [40685019-a9a4-4a4a-9e42-58d2a9f39f9b] Running
	I1120 21:11:41.014230  213043 system_pods.go:89] "storage-provisioner" [1698ab50-608c-439f-b6de-81323e57d2c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:11:41.014248  213043 retry.go:31] will retry after 444.646673ms: missing components: kube-dns
	I1120 21:11:41.463176  213043 system_pods.go:86] 8 kube-system pods found
	I1120 21:11:41.463210  213043 system_pods.go:89] "coredns-66bc5c9577-kbl4d" [7e90701b-e158-4e32-b311-ef635af8eec0] Running
	I1120 21:11:41.463217  213043 system_pods.go:89] "etcd-no-preload-882483" [2cc20a64-e788-4bd3-99ae-22f5e88054c6] Running
	I1120 21:11:41.463222  213043 system_pods.go:89] "kindnet-jr57n" [43754294-4619-410a-9cf0-01baa9df142e] Running
	I1120 21:11:41.463231  213043 system_pods.go:89] "kube-apiserver-no-preload-882483" [e355b3e2-5d70-4e01-b443-2e5c756584db] Running
	I1120 21:11:41.463236  213043 system_pods.go:89] "kube-controller-manager-no-preload-882483" [b7f3c12d-8398-4a69-8399-1c21d54c7624] Running
	I1120 21:11:41.463241  213043 system_pods.go:89] "kube-proxy-n9cg7" [77a3defc-bd58-414c-9c2a-bf750429a720] Running
	I1120 21:11:41.463245  213043 system_pods.go:89] "kube-scheduler-no-preload-882483" [40685019-a9a4-4a4a-9e42-58d2a9f39f9b] Running
	I1120 21:11:41.463248  213043 system_pods.go:89] "storage-provisioner" [1698ab50-608c-439f-b6de-81323e57d2c8] Running
	I1120 21:11:41.463256  213043 system_pods.go:126] duration metric: took 1.489088106s to wait for k8s-apps to be running ...
	I1120 21:11:41.463815  213043 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:11:41.463897  213043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:11:41.478805  213043 system_svc.go:56] duration metric: took 15.533244ms WaitForService to wait for kubelet
	I1120 21:11:41.478831  213043 kubeadm.go:587] duration metric: took 15.191384736s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:11:41.478850  213043 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:11:41.481776  213043 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1120 21:11:41.481816  213043 node_conditions.go:123] node cpu capacity is 2
	I1120 21:11:41.481829  213043 node_conditions.go:105] duration metric: took 2.972812ms to run NodePressure ...
	I1120 21:11:41.481842  213043 start.go:242] waiting for startup goroutines ...
	I1120 21:11:41.481854  213043 start.go:247] waiting for cluster config update ...
	I1120 21:11:41.481868  213043 start.go:256] writing updated cluster config ...
	I1120 21:11:41.482192  213043 ssh_runner.go:195] Run: rm -f paused
	I1120 21:11:41.487575  213043 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:11:41.491383  213043 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kbl4d" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:41.496770  213043 pod_ready.go:94] pod "coredns-66bc5c9577-kbl4d" is "Ready"
	I1120 21:11:41.496795  213043 pod_ready.go:86] duration metric: took 5.33932ms for pod "coredns-66bc5c9577-kbl4d" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:41.499177  213043 pod_ready.go:83] waiting for pod "etcd-no-preload-882483" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:41.504084  213043 pod_ready.go:94] pod "etcd-no-preload-882483" is "Ready"
	I1120 21:11:41.504111  213043 pod_ready.go:86] duration metric: took 4.906837ms for pod "etcd-no-preload-882483" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:41.506620  213043 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-882483" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:41.511433  213043 pod_ready.go:94] pod "kube-apiserver-no-preload-882483" is "Ready"
	I1120 21:11:41.511459  213043 pod_ready.go:86] duration metric: took 4.811968ms for pod "kube-apiserver-no-preload-882483" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:41.513936  213043 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-882483" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:41.892092  213043 pod_ready.go:94] pod "kube-controller-manager-no-preload-882483" is "Ready"
	I1120 21:11:41.892121  213043 pod_ready.go:86] duration metric: took 378.162369ms for pod "kube-controller-manager-no-preload-882483" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:42.092905  213043 pod_ready.go:83] waiting for pod "kube-proxy-n9cg7" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:42.491797  213043 pod_ready.go:94] pod "kube-proxy-n9cg7" is "Ready"
	I1120 21:11:42.491827  213043 pod_ready.go:86] duration metric: took 398.890514ms for pod "kube-proxy-n9cg7" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:42.692310  213043 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-882483" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:43.091974  213043 pod_ready.go:94] pod "kube-scheduler-no-preload-882483" is "Ready"
	I1120 21:11:43.092006  213043 pod_ready.go:86] duration metric: took 399.653499ms for pod "kube-scheduler-no-preload-882483" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:11:43.092019  213043 pod_ready.go:40] duration metric: took 1.604411611s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:11:43.166546  213043 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1120 21:11:43.168303  213043 out.go:179] * Done! kubectl is now configured to use "no-preload-882483" cluster and "default" namespace by default
	W1120 21:11:43.907939  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:45.908969  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:48.411380  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
	W1120 21:11:50.908333  215319 node_ready.go:57] node "embed-certs-121127" has "Ready":"False" status (will retry)
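
The startup sequence above follows a fixed verification order: poll the apiserver's /healthz endpoint, wait for the node's Ready condition, then wait for the kube-system pods and their Ready conditions. Below is a minimal standalone Go sketch of the first two steps, using only the standard library plus kubectl; it is not minikube's actual api_server.go/node_ready.go code, and only the endpoint, context name, and node name are taken from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Step 1: poll the apiserver health endpoint seen in the log.
	// InsecureSkipVerify is only for this sketch; minikube verifies against the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthz: ok")
			break
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(2 * time.Second)
	}

	// Step 2: wait for the node's Ready condition via kubectl (the node_ready.go step).
	for {
		out, err := exec.Command("kubectl", "--context", "no-preload-882483",
			"get", "node", "no-preload-882483",
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println("node no-preload-882483 is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}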
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3332ff430ea2e       1611cd07b61d5       10 seconds ago      Running             busybox                   0                   35e5c6836ebe6       busybox                                     default
	8ba1448c5208e       138784d87c9c5       15 seconds ago      Running             coredns                   0                   ac398ec19623a       coredns-66bc5c9577-kbl4d                    kube-system
	ac1d82c386c0d       66749159455b3       15 seconds ago      Running             storage-provisioner       0                   53d0cc536e5d7       storage-provisioner                         kube-system
	6ae51b7304d17       b1a8c6f707935       26 seconds ago      Running             kindnet-cni               0                   f8231ead75fef       kindnet-jr57n                               kube-system
	f3a97777b67b3       05baa95f5142d       29 seconds ago      Running             kube-proxy                0                   a042a50bc5134       kube-proxy-n9cg7                            kube-system
	a08eea19da810       b5f57ec6b9867       47 seconds ago      Running             kube-scheduler            0                   3f085b80662ac       kube-scheduler-no-preload-882483            kube-system
	1d98f7949f8d4       7eb2c6ff0c5a7       47 seconds ago      Running             kube-controller-manager   0                   dff427621068f       kube-controller-manager-no-preload-882483   kube-system
	680c56dfb4909       a1894772a478e       47 seconds ago      Running             etcd                      0                   fd0e4ce635277       etcd-no-preload-882483                      kube-system
	ff050fee197a6       43911e833d64d       47 seconds ago      Running             kube-apiserver            0                   1498cf02d9a01       kube-apiserver-no-preload-882483            kube-system
	
	
	==> containerd <==
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.384941202Z" level=info msg="connecting to shim ac1d82c386c0dd060910eebb103d9a8ec94f7a984f33fd70ccfd4c5757297c5a" address="unix:///run/containerd/s/837b1dad384941854c9301a2342d31b629c980d2071399c4d6de8f94aafa53cc" protocol=ttrpc version=3
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.428360748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kbl4d,Uid:7e90701b-e158-4e32-b311-ef635af8eec0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac398ec19623a98630af63b6d40ace684172f279e130305031a3d4df61854159\""
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.437483530Z" level=info msg="CreateContainer within sandbox \"ac398ec19623a98630af63b6d40ace684172f279e130305031a3d4df61854159\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.445141776Z" level=info msg="Container 8ba1448c5208ed35c1cd3fe0d2684eedb395466127060c252bfb7797b5f59ca6: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.454312673Z" level=info msg="CreateContainer within sandbox \"ac398ec19623a98630af63b6d40ace684172f279e130305031a3d4df61854159\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ba1448c5208ed35c1cd3fe0d2684eedb395466127060c252bfb7797b5f59ca6\""
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.457151379Z" level=info msg="StartContainer for \"8ba1448c5208ed35c1cd3fe0d2684eedb395466127060c252bfb7797b5f59ca6\""
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.458258656Z" level=info msg="connecting to shim 8ba1448c5208ed35c1cd3fe0d2684eedb395466127060c252bfb7797b5f59ca6" address="unix:///run/containerd/s/a9dcb32955697df086aec876cf871ee9606522f219c92fe39f95dfe16e76ad0a" protocol=ttrpc version=3
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.509002165Z" level=info msg="StartContainer for \"ac1d82c386c0dd060910eebb103d9a8ec94f7a984f33fd70ccfd4c5757297c5a\" returns successfully"
	Nov 20 21:11:40 no-preload-882483 containerd[756]: time="2025-11-20T21:11:40.573710061Z" level=info msg="StartContainer for \"8ba1448c5208ed35c1cd3fe0d2684eedb395466127060c252bfb7797b5f59ca6\" returns successfully"
	Nov 20 21:11:43 no-preload-882483 containerd[756]: time="2025-11-20T21:11:43.731959628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3914dd0d-f188-4b9a-8dd2-72c422726597,Namespace:default,Attempt:0,}"
	Nov 20 21:11:43 no-preload-882483 containerd[756]: time="2025-11-20T21:11:43.777589429Z" level=info msg="connecting to shim 35e5c6836ebe678994871e00fef5b39f4cbff589ce84acd3dd8c33e0591d6623" address="unix:///run/containerd/s/a3dec73619f134ef466677d948be5f86a32022b64e0677e95eb805ce4b1efab8" namespace=k8s.io protocol=ttrpc version=3
	Nov 20 21:11:43 no-preload-882483 containerd[756]: time="2025-11-20T21:11:43.847369614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3914dd0d-f188-4b9a-8dd2-72c422726597,Namespace:default,Attempt:0,} returns sandbox id \"35e5c6836ebe678994871e00fef5b39f4cbff589ce84acd3dd8c33e0591d6623\""
	Nov 20 21:11:43 no-preload-882483 containerd[756]: time="2025-11-20T21:11:43.851209084Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 21:11:45 no-preload-882483 containerd[756]: time="2025-11-20T21:11:45.991991240Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:11:45 no-preload-882483 containerd[756]: time="2025-11-20T21:11:45.992969604Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937185"
	Nov 20 21:11:45 no-preload-882483 containerd[756]: time="2025-11-20T21:11:45.994150713Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:11:45 no-preload-882483 containerd[756]: time="2025-11-20T21:11:45.996926139Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:11:45 no-preload-882483 containerd[756]: time="2025-11-20T21:11:45.997590340Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.146332147s"
	Nov 20 21:11:45 no-preload-882483 containerd[756]: time="2025-11-20T21:11:45.997703696Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 20 21:11:46 no-preload-882483 containerd[756]: time="2025-11-20T21:11:46.004171332Z" level=info msg="CreateContainer within sandbox \"35e5c6836ebe678994871e00fef5b39f4cbff589ce84acd3dd8c33e0591d6623\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 20 21:11:46 no-preload-882483 containerd[756]: time="2025-11-20T21:11:46.011437777Z" level=info msg="Container 3332ff430ea2e3074b03bf83a07d619422acf6aa526c1d23eec70772cb72f175: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 21:11:46 no-preload-882483 containerd[756]: time="2025-11-20T21:11:46.034617132Z" level=info msg="CreateContainer within sandbox \"35e5c6836ebe678994871e00fef5b39f4cbff589ce84acd3dd8c33e0591d6623\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"3332ff430ea2e3074b03bf83a07d619422acf6aa526c1d23eec70772cb72f175\""
	Nov 20 21:11:46 no-preload-882483 containerd[756]: time="2025-11-20T21:11:46.037995900Z" level=info msg="StartContainer for \"3332ff430ea2e3074b03bf83a07d619422acf6aa526c1d23eec70772cb72f175\""
	Nov 20 21:11:46 no-preload-882483 containerd[756]: time="2025-11-20T21:11:46.040460921Z" level=info msg="connecting to shim 3332ff430ea2e3074b03bf83a07d619422acf6aa526c1d23eec70772cb72f175" address="unix:///run/containerd/s/a3dec73619f134ef466677d948be5f86a32022b64e0677e95eb805ce4b1efab8" protocol=ttrpc version=3
	Nov 20 21:11:46 no-preload-882483 containerd[756]: time="2025-11-20T21:11:46.134322525Z" level=info msg="StartContainer for \"3332ff430ea2e3074b03bf83a07d619422acf6aa526c1d23eec70772cb72f175\" returns successfully"
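
The containerd entries above show the busybox pod sandbox being created and gcr.io/k8s-minikube/busybox:1.28.4-glibc being pulled in about 2.1s before the container starts. For reference, the same pull can be reproduced against this socket with the containerd Go client; this is a minimal sketch assuming the github.com/containerd/containerd module (containerd 2.x moves the client to github.com/containerd/containerd/v2/client), not the kubelet's actual CRI pull path.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd socket the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "gcr.io/k8s-minikube/busybox:1.28.4-glibc", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, _ := img.Size(ctx)
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}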
	
	
	==> coredns [8ba1448c5208ed35c1cd3fe0d2684eedb395466127060c252bfb7797b5f59ca6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43155 - 29135 "HINFO IN 7457519116205105251.4743579555540716560. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020670895s
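
This CoreDNS instance is running the Corefile patched by the sed pipeline earlier in the startup log, which inserts a log directive before errors and a hosts block for host.minikube.internal before the forward plugin (confirmed by the "host record injected into CoreDNS's ConfigMap" line at 21:11:27). Trimmed to just the injected parts, the patched server block looks roughly like this (other stock kubeadm directives elided):

.:53 {
    log
    errors
    ...
    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    ...
}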
	
	
	==> describe nodes <==
	Name:               no-preload-882483
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-882483
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=no-preload-882483
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_11_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:11:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-882483
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:11:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:11:51 +0000   Thu, 20 Nov 2025 21:11:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:11:51 +0000   Thu, 20 Nov 2025 21:11:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:11:51 +0000   Thu, 20 Nov 2025 21:11:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:11:51 +0000   Thu, 20 Nov 2025 21:11:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-882483
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                3ba0793b-34b4-41b6-b5b2-549aaf1b0ffc
	  Boot ID:                    0cc3a06a-788d-45d4-8fff-2131330a9ee0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-kbl4d                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-882483                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-jr57n                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-882483             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-882483    200m (10%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-n9cg7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-882483             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Warning  CgroupV1                 49s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node no-preload-882483 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node no-preload-882483 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     49s (x7 over 49s)  kubelet          Node no-preload-882483 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  49s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-882483 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-882483 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-882483 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-882483 event: Registered Node no-preload-882483 in Controller
	  Normal   NodeReady                17s                kubelet          Node no-preload-882483 status is now: NodeReady
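
The conditions, capacity, and allocatable figures in this describe output are read straight from the Node object's status. Below is a minimal client-go sketch that prints the same Ready condition and allocatable resources; the kubeconfig path and node name are taken from this test run, but the program itself is illustrative and not part of the test suite.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21923-2300/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-882483", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Same data as the Conditions table above (MemoryPressure, DiskPressure, PIDPressure, Ready).
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	// Same data as the Allocatable block above.
	fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String(),
		"memory:", node.Status.Allocatable.Memory().String())
}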
	
	
	==> dmesg <==
	[Nov20 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014399] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.765613] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.782554] kauditd_printk_skb: 36 callbacks suppressed
	[Nov20 20:40] hrtimer: interrupt took 1888672 ns
	
	
	==> etcd [680c56dfb49094937510bccdd6d9cde462c90e51b1a41a672c85e8160ca93ca1] <==
	{"level":"warn","ts":"2025-11-20T21:11:15.081553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.102776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.138030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.213432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.249425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.296059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.411545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.458729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.502767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.543291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.581865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.707633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.722537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.773289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.784361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.844891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.940032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.945816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:15.992857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:16.049315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:16.074985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:16.105411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:16.163176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:16.256769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:16.545560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48002","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:11:56 up 54 min,  0 user,  load average: 3.54, 3.31, 2.85
	Linux no-preload-882483 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6ae51b7304d17b3fff94baae871c8f4a1af4bacafee33361038b139626b00d12] <==
	I1120 21:11:29.528699       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:11:29.619151       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 21:11:29.619361       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:11:29.619384       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:11:29.619401       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:11:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:11:29.824136       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:11:29.824329       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:11:29.919286       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:11:29.919579       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:11:30.419490       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:11:30.419752       1 metrics.go:72] Registering metrics
	I1120 21:11:30.420061       1 controller.go:711] "Syncing nftables rules"
	I1120 21:11:39.829242       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:11:39.829307       1 main.go:301] handling current node
	I1120 21:11:49.822527       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:11:49.822564       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ff050fee197a6920f29180dac6d4c1f8f4db987e76a6f6cfff1a6c0a017071ec] <==
	I1120 21:11:18.552091       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 21:11:18.552247       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1120 21:11:18.584221       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:11:18.584533       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 21:11:18.646891       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:11:18.647080       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:11:18.756889       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:11:18.986459       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 21:11:19.001848       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 21:11:19.002060       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:11:19.986978       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:11:20.067821       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:11:20.174391       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 21:11:20.198325       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1120 21:11:20.199816       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:11:20.212826       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:11:20.402005       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:11:21.009116       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:11:21.059363       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 21:11:21.077319       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:11:25.608435       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:11:25.615335       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:11:25.906768       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:11:26.506951       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1120 21:11:52.620352       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:55244: use of closed network connection
	
	
	==> kube-controller-manager [1d98f7949f8d446666313ef7b81cfc3ced91f03248f87f9d0926e5d14a16e359] <==
	I1120 21:11:25.451324       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 21:11:25.451620       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 21:11:25.452132       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 21:11:25.452754       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-882483" podCIDRs=["10.244.0.0/24"]
	I1120 21:11:25.453120       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 21:11:25.450101       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 21:11:25.453354       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 21:11:25.453458       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-882483"
	I1120 21:11:25.453535       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1120 21:11:25.450121       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 21:11:25.450148       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 21:11:25.454300       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 21:11:25.454355       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 21:11:25.456072       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 21:11:25.457799       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 21:11:25.461759       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:11:25.463915       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:11:25.495447       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 21:11:25.498797       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 21:11:25.498968       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 21:11:25.501127       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 21:11:25.501138       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:11:25.501559       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 21:11:25.506226       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:11:40.456478       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f3a97777b67b3798e8d95e3392bdd5d7980ea5e430bce8a928d0f4efe5223a57] <==
	I1120 21:11:27.529305       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:11:27.612514       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:11:27.714867       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:11:27.714913       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 21:11:27.715115       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:11:27.734126       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:11:27.734186       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:11:27.738138       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:11:27.738726       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:11:27.738752       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:11:27.742093       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:11:27.742297       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:11:27.742797       1 config.go:200] "Starting service config controller"
	I1120 21:11:27.743094       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:11:27.744889       1 config.go:309] "Starting node config controller"
	I1120 21:11:27.744909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:11:27.744917       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:11:27.745442       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:11:27.745460       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:11:27.842792       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:11:27.843928       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:11:27.845519       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a08eea19da810b62a0878817a24a10b282995d1596e74e0b8a2c3bb031d8d573] <==
	E1120 21:11:18.501327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:11:18.501731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:11:18.501799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:11:18.501873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 21:11:18.501916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 21:11:18.501943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 21:11:18.502837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:11:18.502888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:11:18.502945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:11:18.502996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:11:18.503044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:11:18.503073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 21:11:19.339579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 21:11:19.394144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:11:19.422823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:11:19.471286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:11:19.564102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 21:11:19.587560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:11:19.623227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 21:11:19.631289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:11:19.649584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:11:19.660223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1120 21:11:19.713140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:11:19.720651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1120 21:11:21.734725       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:11:22 no-preload-882483 kubelet[2190]: I1120 21:11:22.099887    2190 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-no-preload-882483"
	Nov 20 21:11:22 no-preload-882483 kubelet[2190]: E1120 21:11:22.120445    2190 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-no-preload-882483\" already exists" pod="kube-system/kube-scheduler-no-preload-882483"
	Nov 20 21:11:22 no-preload-882483 kubelet[2190]: I1120 21:11:22.136821    2190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-882483" podStartSLOduration=1.136799669 podStartE2EDuration="1.136799669s" podCreationTimestamp="2025-11-20 21:11:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:22.121610073 +0000 UTC m=+1.250319770" watchObservedRunningTime="2025-11-20 21:11:22.136799669 +0000 UTC m=+1.265509358"
	Nov 20 21:11:22 no-preload-882483 kubelet[2190]: I1120 21:11:22.162094    2190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-882483" podStartSLOduration=1.162074788 podStartE2EDuration="1.162074788s" podCreationTimestamp="2025-11-20 21:11:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:22.138047388 +0000 UTC m=+1.266757069" watchObservedRunningTime="2025-11-20 21:11:22.162074788 +0000 UTC m=+1.290784469"
	Nov 20 21:11:25 no-preload-882483 kubelet[2190]: I1120 21:11:25.541540    2190 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 21:11:25 no-preload-882483 kubelet[2190]: I1120 21:11:25.543047    2190 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.632427    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77a3defc-bd58-414c-9c2a-bf750429a720-xtables-lock\") pod \"kube-proxy-n9cg7\" (UID: \"77a3defc-bd58-414c-9c2a-bf750429a720\") " pod="kube-system/kube-proxy-n9cg7"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.632483    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43754294-4619-410a-9cf0-01baa9df142e-xtables-lock\") pod \"kindnet-jr57n\" (UID: \"43754294-4619-410a-9cf0-01baa9df142e\") " pod="kube-system/kindnet-jr57n"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.632508    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/77a3defc-bd58-414c-9c2a-bf750429a720-kube-proxy\") pod \"kube-proxy-n9cg7\" (UID: \"77a3defc-bd58-414c-9c2a-bf750429a720\") " pod="kube-system/kube-proxy-n9cg7"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.632539    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77a3defc-bd58-414c-9c2a-bf750429a720-lib-modules\") pod \"kube-proxy-n9cg7\" (UID: \"77a3defc-bd58-414c-9c2a-bf750429a720\") " pod="kube-system/kube-proxy-n9cg7"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.632558    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrfnb\" (UniqueName: \"kubernetes.io/projected/77a3defc-bd58-414c-9c2a-bf750429a720-kube-api-access-hrfnb\") pod \"kube-proxy-n9cg7\" (UID: \"77a3defc-bd58-414c-9c2a-bf750429a720\") " pod="kube-system/kube-proxy-n9cg7"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.632657    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/43754294-4619-410a-9cf0-01baa9df142e-cni-cfg\") pod \"kindnet-jr57n\" (UID: \"43754294-4619-410a-9cf0-01baa9df142e\") " pod="kube-system/kindnet-jr57n"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.632714    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43754294-4619-410a-9cf0-01baa9df142e-lib-modules\") pod \"kindnet-jr57n\" (UID: \"43754294-4619-410a-9cf0-01baa9df142e\") " pod="kube-system/kindnet-jr57n"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.632730    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74jgk\" (UniqueName: \"kubernetes.io/projected/43754294-4619-410a-9cf0-01baa9df142e-kube-api-access-74jgk\") pod \"kindnet-jr57n\" (UID: \"43754294-4619-410a-9cf0-01baa9df142e\") " pod="kube-system/kindnet-jr57n"
	Nov 20 21:11:26 no-preload-882483 kubelet[2190]: I1120 21:11:26.768382    2190 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 21:11:28 no-preload-882483 kubelet[2190]: I1120 21:11:28.149267    2190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n9cg7" podStartSLOduration=2.149223963 podStartE2EDuration="2.149223963s" podCreationTimestamp="2025-11-20 21:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:28.148883182 +0000 UTC m=+7.277592871" watchObservedRunningTime="2025-11-20 21:11:28.149223963 +0000 UTC m=+7.277933644"
	Nov 20 21:11:39 no-preload-882483 kubelet[2190]: I1120 21:11:39.887023    2190 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 21:11:39 no-preload-882483 kubelet[2190]: I1120 21:11:39.923091    2190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-jr57n" podStartSLOduration=11.916044724 podStartE2EDuration="13.923073445s" podCreationTimestamp="2025-11-20 21:11:26 +0000 UTC" firstStartedPulling="2025-11-20 21:11:27.259885095 +0000 UTC m=+6.388594776" lastFinishedPulling="2025-11-20 21:11:29.266913816 +0000 UTC m=+8.395623497" observedRunningTime="2025-11-20 21:11:30.165306592 +0000 UTC m=+9.294016273" watchObservedRunningTime="2025-11-20 21:11:39.923073445 +0000 UTC m=+19.051783126"
	Nov 20 21:11:39 no-preload-882483 kubelet[2190]: I1120 21:11:39.940547    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1698ab50-608c-439f-b6de-81323e57d2c8-tmp\") pod \"storage-provisioner\" (UID: \"1698ab50-608c-439f-b6de-81323e57d2c8\") " pod="kube-system/storage-provisioner"
	Nov 20 21:11:39 no-preload-882483 kubelet[2190]: I1120 21:11:39.940625    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz66s\" (UniqueName: \"kubernetes.io/projected/1698ab50-608c-439f-b6de-81323e57d2c8-kube-api-access-pz66s\") pod \"storage-provisioner\" (UID: \"1698ab50-608c-439f-b6de-81323e57d2c8\") " pod="kube-system/storage-provisioner"
	Nov 20 21:11:40 no-preload-882483 kubelet[2190]: I1120 21:11:40.043771    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e90701b-e158-4e32-b311-ef635af8eec0-config-volume\") pod \"coredns-66bc5c9577-kbl4d\" (UID: \"7e90701b-e158-4e32-b311-ef635af8eec0\") " pod="kube-system/coredns-66bc5c9577-kbl4d"
	Nov 20 21:11:40 no-preload-882483 kubelet[2190]: I1120 21:11:40.044031    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rws28\" (UniqueName: \"kubernetes.io/projected/7e90701b-e158-4e32-b311-ef635af8eec0-kube-api-access-rws28\") pod \"coredns-66bc5c9577-kbl4d\" (UID: \"7e90701b-e158-4e32-b311-ef635af8eec0\") " pod="kube-system/coredns-66bc5c9577-kbl4d"
	Nov 20 21:11:41 no-preload-882483 kubelet[2190]: I1120 21:11:41.211383    2190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kbl4d" podStartSLOduration=15.211364053 podStartE2EDuration="15.211364053s" podCreationTimestamp="2025-11-20 21:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:41.197445025 +0000 UTC m=+20.326154722" watchObservedRunningTime="2025-11-20 21:11:41.211364053 +0000 UTC m=+20.340073742"
	Nov 20 21:11:41 no-preload-882483 kubelet[2190]: I1120 21:11:41.211533    2190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.211526599 podStartE2EDuration="14.211526599s" podCreationTimestamp="2025-11-20 21:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:41.210866599 +0000 UTC m=+20.339576305" watchObservedRunningTime="2025-11-20 21:11:41.211526599 +0000 UTC m=+20.340236288"
	Nov 20 21:11:43 no-preload-882483 kubelet[2190]: I1120 21:11:43.471451    2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffprn\" (UniqueName: \"kubernetes.io/projected/3914dd0d-f188-4b9a-8dd2-72c422726597-kube-api-access-ffprn\") pod \"busybox\" (UID: \"3914dd0d-f188-4b9a-8dd2-72c422726597\") " pod="default/busybox"
	
	
	==> storage-provisioner [ac1d82c386c0dd060910eebb103d9a8ec94f7a984f33fd70ccfd4c5757297c5a] <==
	I1120 21:11:40.627655       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 21:11:40.631587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:40.653487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:11:40.654833       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 21:11:40.655198       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-882483_7ab555a1-1d59-44be-9ed9-d3982c29f190!
	I1120 21:11:40.656970       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6e59f04-25e3-468b-be2a-acd42c0d8ce9", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-882483_7ab555a1-1d59-44be-9ed9-d3982c29f190 became leader
	W1120 21:11:40.660976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:40.668888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:11:40.756331       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-882483_7ab555a1-1d59-44be-9ed9-d3982c29f190!
	W1120 21:11:42.672417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:42.677961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:44.687452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:44.693167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:46.696595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:46.702149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:48.705360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:48.710790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:50.714207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:50.721668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:52.726721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:52.734927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:54.738582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:54.749570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:56.752699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:56.760199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-882483 -n no-preload-882483
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-882483 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (14.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (14.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-121127 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [25b27561-eeaa-45d2-b437-daad5f809cda] Pending
helpers_test.go:352: "busybox" [25b27561-eeaa-45d2-b437-daad5f809cda] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [25b27561-eeaa-45d2-b437-daad5f809cda] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003485301s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-121127 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
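The check the test performs can be reproduced by hand with the same command it runs (a minimal sketch, assuming the busybox pod from testdata/busybox.yaml is still running in the default namespace of this profile; the -Hn variant is an assumption about the busybox ash ulimit builtin, added only for comparison):

	# soft open-file limit as seen inside the busybox container (the value the test asserts on)
	kubectl --context embed-certs-121127 exec busybox -- /bin/sh -c "ulimit -n"
	# hard open-file limit, for comparison with the soft value above
	kubectl --context embed-certs-121127 exec busybox -- /bin/sh -c "ulimit -Hn"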
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-121127
helpers_test.go:243: (dbg) docker inspect embed-certs-121127:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1e01af2d673aea0d9b723a2647bf1e9e391fc200c0895ee433b7e796ab7ba8ff",
	        "Created": "2025-11-20T21:10:39.53580006Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 216423,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:10:39.61711974Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/1e01af2d673aea0d9b723a2647bf1e9e391fc200c0895ee433b7e796ab7ba8ff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1e01af2d673aea0d9b723a2647bf1e9e391fc200c0895ee433b7e796ab7ba8ff/hostname",
	        "HostsPath": "/var/lib/docker/containers/1e01af2d673aea0d9b723a2647bf1e9e391fc200c0895ee433b7e796ab7ba8ff/hosts",
	        "LogPath": "/var/lib/docker/containers/1e01af2d673aea0d9b723a2647bf1e9e391fc200c0895ee433b7e796ab7ba8ff/1e01af2d673aea0d9b723a2647bf1e9e391fc200c0895ee433b7e796ab7ba8ff-json.log",
	        "Name": "/embed-certs-121127",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-121127:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-121127",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1e01af2d673aea0d9b723a2647bf1e9e391fc200c0895ee433b7e796ab7ba8ff",
	                "LowerDir": "/var/lib/docker/overlay2/b509bc5c46a5e5b602aabcf6bed43ceb3128c699036cc161ce02303780625a39-init/diff:/var/lib/docker/overlay2/5105da773b59b243b777c3c083d206b6a741bd11ebc5a0283799917fe36ebbb2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b509bc5c46a5e5b602aabcf6bed43ceb3128c699036cc161ce02303780625a39/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b509bc5c46a5e5b602aabcf6bed43ceb3128c699036cc161ce02303780625a39/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b509bc5c46a5e5b602aabcf6bed43ceb3128c699036cc161ce02303780625a39/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-121127",
	                "Source": "/var/lib/docker/volumes/embed-certs-121127/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-121127",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-121127",
	                "name.minikube.sigs.k8s.io": "embed-certs-121127",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2b883e1e0c90cfa875ea3a2b30a52f2c81b3f194aeb6a53564ddcfeafbd3aaf0",
	            "SandboxKey": "/var/run/docker/netns/2b883e1e0c90",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-121127": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:22:aa:02:a3:6e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bbac06c0462da8b50eaaaee1c67cbdbf5ee119e8c368ef8ccec363fe3a0deee0",
	                    "EndpointID": "61cadc264e16494596cca35b6c2c2024186f05d928c7865ca3263765cd3d68c7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-121127",
	                        "1e01af2d673a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-121127 -n embed-certs-121127
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-121127 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-121127 logs -n 25: (1.706162766s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ start   │ -p force-systemd-env-444240 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-444240 │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p cert-expiration-339813 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-339813   │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ force-systemd-env-444240 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-444240 │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ delete  │ -p force-systemd-env-444240                                                                                                                                                                                                                         │ force-systemd-env-444240 │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p cert-options-530158 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-530158      │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ cert-options-530158 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-530158      │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ -p cert-options-530158 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-530158      │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ delete  │ -p cert-options-530158                                                                                                                                                                                                                              │ cert-options-530158      │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p old-k8s-version-023521 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:08 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-023521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │ 20 Nov 25 21:09 UTC │
	│ stop    │ -p old-k8s-version-023521 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │ 20 Nov 25 21:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-023521 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │ 20 Nov 25 21:09 UTC │
	│ start   │ -p old-k8s-version-023521 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ -p cert-expiration-339813 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-339813   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p cert-expiration-339813                                                                                                                                                                                                                           │ cert-expiration-339813   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ image   │ old-k8s-version-023521 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ -p no-preload-882483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-882483        │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:11 UTC │
	│ pause   │ -p old-k8s-version-023521 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ unpause │ -p old-k8s-version-023521 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p old-k8s-version-023521                                                                                                                                                                                                                           │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p old-k8s-version-023521                                                                                                                                                                                                                           │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ -p embed-certs-121127 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-121127       │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-882483 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-882483        │ jenkins │ v1.37.0 │ 20 Nov 25 21:11 UTC │ 20 Nov 25 21:11 UTC │
	│ stop    │ -p no-preload-882483 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-882483        │ jenkins │ v1.37.0 │ 20 Nov 25 21:11 UTC │ 20 Nov 25 21:12 UTC │
	│ addons  │ enable dashboard -p no-preload-882483 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-882483        │ jenkins │ v1.37.0 │ 20 Nov 25 21:12 UTC │ 20 Nov 25 21:12 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:12:11
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:12:11.632610  222253 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:12:11.632799  222253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:12:11.632825  222253 out.go:374] Setting ErrFile to fd 2...
	I1120 21:12:11.632846  222253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:12:11.633592  222253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 21:12:11.634910  222253 out.go:368] Setting JSON to false
	I1120 21:12:11.636125  222253 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3281,"bootTime":1763669851,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1120 21:12:11.636213  222253 start.go:143] virtualization:  
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	162e6ca6e027d       1611cd07b61d5       7 seconds ago        Running             busybox                   0                   b68911ae2a843       busybox                                      default
	c36018563f9d2       ba04bb24b9575       12 seconds ago       Running             storage-provisioner       0                   a01920b2dc30a       storage-provisioner                          kube-system
	83b99b1db5fa6       138784d87c9c5       13 seconds ago       Running             coredns                   0                   0eaad59ebfd2d       coredns-66bc5c9577-n27nb                     kube-system
	fc9aba71d2405       b1a8c6f707935       54 seconds ago       Running             kindnet-cni               0                   7287147637b36       kindnet-v9ltq                                kube-system
	64ed1b418fe60       05baa95f5142d       54 seconds ago       Running             kube-proxy                0                   fd05db422d173       kube-proxy-cwvzr                             kube-system
	e6dcdd9202998       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   c56719205c082       kube-scheduler-embed-certs-121127            kube-system
	320335813f083       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   4f175f07d0408       kube-apiserver-embed-certs-121127            kube-system
	0bbf905a025ff       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   7f8f085e427f2       kube-controller-manager-embed-certs-121127   kube-system
	b62a326059043       a1894772a478e       About a minute ago   Running             etcd                      0                   d0e238102e598       etcd-embed-certs-121127                      kube-system
	
	
	==> containerd <==
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.400745380Z" level=info msg="CreateContainer within sandbox \"0eaad59ebfd2d15d23b4e2634037ea62eaba9ef7dab4d648f29363f99d7ab2c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"83b99b1db5fa6379ed83cc78074fa129b8e6ee77f17963dcafbaba355f508764\""
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.426818196Z" level=info msg="StartContainer for \"83b99b1db5fa6379ed83cc78074fa129b8e6ee77f17963dcafbaba355f508764\""
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.432321893Z" level=info msg="connecting to shim 83b99b1db5fa6379ed83cc78074fa129b8e6ee77f17963dcafbaba355f508764" address="unix:///run/containerd/s/4eee3db2abcfd958fd7ef761f624a48e654cb872b31fa7275b11e72af9df234d" protocol=ttrpc version=3
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.437609291Z" level=info msg="Container c36018563f9d2e20de9e1b378f2cb1fe887bde1deb2823e08117dfd3ab25d552: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.474604856Z" level=info msg="CreateContainer within sandbox \"a01920b2dc30af6a1cc7f00e86c45845ac05bf2adbd8ac310d3b7c4c15b1a051\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"c36018563f9d2e20de9e1b378f2cb1fe887bde1deb2823e08117dfd3ab25d552\""
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.478248639Z" level=info msg="StartContainer for \"c36018563f9d2e20de9e1b378f2cb1fe887bde1deb2823e08117dfd3ab25d552\""
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.481159534Z" level=info msg="connecting to shim c36018563f9d2e20de9e1b378f2cb1fe887bde1deb2823e08117dfd3ab25d552" address="unix:///run/containerd/s/3a5f64159e89fcec81f4ae71307e946989484d5a2c9c121667b6024ec0881013" protocol=ttrpc version=3
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.569925553Z" level=info msg="StartContainer for \"83b99b1db5fa6379ed83cc78074fa129b8e6ee77f17963dcafbaba355f508764\" returns successfully"
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.659415375Z" level=info msg="StartContainer for \"c36018563f9d2e20de9e1b378f2cb1fe887bde1deb2823e08117dfd3ab25d552\" returns successfully"
	Nov 20 21:12:02 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:02.162479404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:25b27561-eeaa-45d2-b437-daad5f809cda,Namespace:default,Attempt:0,}"
	Nov 20 21:12:02 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:02.220892818Z" level=info msg="connecting to shim b68911ae2a843ec812f1ea323f7c960cac1b3bcd708f67bb2e86a6031d26ca64" address="unix:///run/containerd/s/ed0c6c4cb14834f6fe50eade4f89c981ad74698b797f836eb30530cccb8a85c7" namespace=k8s.io protocol=ttrpc version=3
	Nov 20 21:12:02 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:02.288802633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:25b27561-eeaa-45d2-b437-daad5f809cda,Namespace:default,Attempt:0,} returns sandbox id \"b68911ae2a843ec812f1ea323f7c960cac1b3bcd708f67bb2e86a6031d26ca64\""
	Nov 20 21:12:02 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:02.292257251Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.667506519Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.669410094Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937189"
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.671861658Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.675341623Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.675857209Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.383363745s"
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.675905629Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.685492575Z" level=info msg="CreateContainer within sandbox \"b68911ae2a843ec812f1ea323f7c960cac1b3bcd708f67bb2e86a6031d26ca64\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.698361042Z" level=info msg="Container 162e6ca6e027df8081f7dc141e62cc44fa27c2d9aa83098cb2b25618557a68aa: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.710192821Z" level=info msg="CreateContainer within sandbox \"b68911ae2a843ec812f1ea323f7c960cac1b3bcd708f67bb2e86a6031d26ca64\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"162e6ca6e027df8081f7dc141e62cc44fa27c2d9aa83098cb2b25618557a68aa\""
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.711050215Z" level=info msg="StartContainer for \"162e6ca6e027df8081f7dc141e62cc44fa27c2d9aa83098cb2b25618557a68aa\""
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.712226548Z" level=info msg="connecting to shim 162e6ca6e027df8081f7dc141e62cc44fa27c2d9aa83098cb2b25618557a68aa" address="unix:///run/containerd/s/ed0c6c4cb14834f6fe50eade4f89c981ad74698b797f836eb30530cccb8a85c7" protocol=ttrpc version=3
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.785988753Z" level=info msg="StartContainer for \"162e6ca6e027df8081f7dc141e62cc44fa27c2d9aa83098cb2b25618557a68aa\" returns successfully"
	
	
	==> coredns [83b99b1db5fa6379ed83cc78074fa129b8e6ee77f17963dcafbaba355f508764] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44891 - 24766 "HINFO IN 6891144907147596844.857419956422712390. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013117587s
	
	
	==> describe nodes <==
	Name:               embed-certs-121127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-121127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=embed-certs-121127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_11_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:11:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-121127
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:12:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:11:58 +0000   Thu, 20 Nov 2025 21:11:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:11:58 +0000   Thu, 20 Nov 2025 21:11:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:11:58 +0000   Thu, 20 Nov 2025 21:11:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:11:58 +0000   Thu, 20 Nov 2025 21:11:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-121127
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                0314476a-4c90-495e-bfa8-8db07e9365ab
	  Boot ID:                    0cc3a06a-788d-45d4-8fff-2131330a9ee0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-n27nb                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-121127                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-v9ltq                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-121127             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-embed-certs-121127    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-cwvzr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-121127             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeAllocatableEnforced  72s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 72s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x3 over 72s)  kubelet          Node embed-certs-121127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x3 over 72s)  kubelet          Node embed-certs-121127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x3 over 72s)  kubelet          Node embed-certs-121127 status is now: NodeHasSufficientPID
	  Normal   Starting                 72s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-121127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-121127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-121127 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-121127 event: Registered Node embed-certs-121127 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-121127 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov20 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014399] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.765613] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.782554] kauditd_printk_skb: 36 callbacks suppressed
	[Nov20 20:40] hrtimer: interrupt took 1888672 ns
	
	
	==> etcd [b62a3260590432c5db545b5e461f7f58f29b9300646ecdfbfcaffc0459763f55] <==
	{"level":"warn","ts":"2025-11-20T21:11:04.567554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.616506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.689947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.723658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.752665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.789974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.848267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.880061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.919101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.956566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.975509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.007228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.038893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.075153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.118876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.163041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.215680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.249590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.308291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.351944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.408906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.457349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.506315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.571988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.753086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44714","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:12:12 up 54 min,  0 user,  load average: 2.96, 3.19, 2.82
	Linux embed-certs-121127 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fc9aba71d240579afa3c8851bc29c0b698c134598a36022bdcbf5d8928accf5b] <==
	I1120 21:11:18.461328       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:11:18.521621       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 21:11:18.521760       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:11:18.521774       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:11:18.521788       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:11:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:11:18.727611       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:11:18.727632       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:11:18.727640       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:11:18.727970       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 21:11:48.727343       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 21:11:48.728269       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 21:11:48.728383       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1120 21:11:48.728495       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1120 21:11:50.228072       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:11:50.228137       1 metrics.go:72] Registering metrics
	I1120 21:11:50.228221       1 controller.go:711] "Syncing nftables rules"
	I1120 21:11:58.733890       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:11:58.733949       1 main.go:301] handling current node
	I1120 21:12:08.728075       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:12:08.728113       1 main.go:301] handling current node
	
	
	==> kube-apiserver [320335813f08375610807e0c11fe8fb16b551eaac5c72d4e8144d52c5dce11eb] <==
	I1120 21:11:07.616699       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 21:11:07.640802       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 21:11:07.641180       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:11:07.643863       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 21:11:07.665346       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:11:07.687126       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:11:07.690487       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:11:08.139125       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 21:11:08.161876       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 21:11:08.164326       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:11:09.662993       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:11:09.755866       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:11:09.965067       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 21:11:10.037359       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1120 21:11:10.039208       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:11:10.047589       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:11:10.481186       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:11:11.147163       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:11:11.172393       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 21:11:11.190094       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:11:16.235709       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:11:16.246198       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:11:16.287160       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1120 21:11:16.449617       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1120 21:12:11.047050       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:48048: use of closed network connection
	
	
	==> kube-controller-manager [0bbf905a025ff70fc4eb815915c61563da14a81cb1b77291f7a7aa69d28b7af2] <==
	I1120 21:11:15.672476       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 21:11:15.672603       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 21:11:15.672767       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 21:11:15.672840       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1120 21:11:15.672938       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 21:11:15.673059       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-121127"
	I1120 21:11:15.673127       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1120 21:11:15.678869       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1120 21:11:15.679069       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 21:11:15.679346       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:11:15.682562       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 21:11:15.687691       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 21:11:15.694825       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 21:11:15.695129       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:11:15.696163       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:11:15.701962       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 21:11:15.719448       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:11:15.719655       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:11:15.719749       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:11:15.721969       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 21:11:15.721988       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 21:11:15.725180       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 21:11:15.732456       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 21:11:15.738354       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:12:00.681549       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [64ed1b418fe6038cab3ada21f3e6088a0f3e61a04392ee0286c9253024736e16] <==
	I1120 21:11:18.245310       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:11:18.403263       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:11:18.504707       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:11:18.504744       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 21:11:18.504827       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:11:18.628659       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:11:18.628716       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:11:18.679477       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:11:18.680132       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:11:18.680150       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:11:18.698100       1 config.go:200] "Starting service config controller"
	I1120 21:11:18.698122       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:11:18.698137       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:11:18.698140       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:11:18.698179       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:11:18.698184       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:11:18.707262       1 config.go:309] "Starting node config controller"
	I1120 21:11:18.707285       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:11:18.707294       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:11:18.799156       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:11:18.801752       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:11:18.801803       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e6dcdd9202998fd984e097f1d1684a69449a296c7bf259edd8a7797e1db7722f] <==
	I1120 21:11:07.512014       1 serving.go:386] Generated self-signed cert in-memory
	I1120 21:11:10.082797       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 21:11:10.082840       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:11:10.105448       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 21:11:10.106188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 21:11:10.115950       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 21:11:10.116143       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 21:11:10.120404       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:11:10.120531       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:11:10.120598       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:11:10.120642       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:11:10.216883       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1120 21:11:10.228095       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:11:10.228032       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:11:12 embed-certs-121127 kubelet[1478]: I1120 21:11:12.755228    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-121127" podStartSLOduration=1.755182679 podStartE2EDuration="1.755182679s" podCreationTimestamp="2025-11-20 21:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:12.751108276 +0000 UTC m=+1.652136892" watchObservedRunningTime="2025-11-20 21:11:12.755182679 +0000 UTC m=+1.656211287"
	Nov 20 21:11:15 embed-certs-121127 kubelet[1478]: I1120 21:11:15.671521    1478 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 21:11:15 embed-certs-121127 kubelet[1478]: I1120 21:11:15.672167    1478 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.502615    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc14c174-cc63-4212-b68f-f3d6beabefd2-lib-modules\") pod \"kube-proxy-cwvzr\" (UID: \"bc14c174-cc63-4212-b68f-f3d6beabefd2\") " pod="kube-system/kube-proxy-cwvzr"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.502660    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzc7m\" (UniqueName: \"kubernetes.io/projected/bc14c174-cc63-4212-b68f-f3d6beabefd2-kube-api-access-wzc7m\") pod \"kube-proxy-cwvzr\" (UID: \"bc14c174-cc63-4212-b68f-f3d6beabefd2\") " pod="kube-system/kube-proxy-cwvzr"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.502683    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8bfde00-17f2-4c00-99e1-c1869ad89980-lib-modules\") pod \"kindnet-v9ltq\" (UID: \"d8bfde00-17f2-4c00-99e1-c1869ad89980\") " pod="kube-system/kindnet-v9ltq"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.502703    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bc14c174-cc63-4212-b68f-f3d6beabefd2-kube-proxy\") pod \"kube-proxy-cwvzr\" (UID: \"bc14c174-cc63-4212-b68f-f3d6beabefd2\") " pod="kube-system/kube-proxy-cwvzr"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.502722    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc14c174-cc63-4212-b68f-f3d6beabefd2-xtables-lock\") pod \"kube-proxy-cwvzr\" (UID: \"bc14c174-cc63-4212-b68f-f3d6beabefd2\") " pod="kube-system/kube-proxy-cwvzr"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.502740    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8bfde00-17f2-4c00-99e1-c1869ad89980-xtables-lock\") pod \"kindnet-v9ltq\" (UID: \"d8bfde00-17f2-4c00-99e1-c1869ad89980\") " pod="kube-system/kindnet-v9ltq"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.502758    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2l5x\" (UniqueName: \"kubernetes.io/projected/d8bfde00-17f2-4c00-99e1-c1869ad89980-kube-api-access-v2l5x\") pod \"kindnet-v9ltq\" (UID: \"d8bfde00-17f2-4c00-99e1-c1869ad89980\") " pod="kube-system/kindnet-v9ltq"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.502774    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d8bfde00-17f2-4c00-99e1-c1869ad89980-cni-cfg\") pod \"kindnet-v9ltq\" (UID: \"d8bfde00-17f2-4c00-99e1-c1869ad89980\") " pod="kube-system/kindnet-v9ltq"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: E1120 21:11:16.667232    1478 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: E1120 21:11:16.667276    1478 projected.go:196] Error preparing data for projected volume kube-api-access-v2l5x for pod kube-system/kindnet-v9ltq: configmap "kube-root-ca.crt" not found
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: E1120 21:11:16.667363    1478 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8bfde00-17f2-4c00-99e1-c1869ad89980-kube-api-access-v2l5x podName:d8bfde00-17f2-4c00-99e1-c1869ad89980 nodeName:}" failed. No retries permitted until 2025-11-20 21:11:17.167337478 +0000 UTC m=+6.068366078 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v2l5x" (UniqueName: "kubernetes.io/projected/d8bfde00-17f2-4c00-99e1-c1869ad89980-kube-api-access-v2l5x") pod "kindnet-v9ltq" (UID: "d8bfde00-17f2-4c00-99e1-c1869ad89980") : configmap "kube-root-ca.crt" not found
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.689178    1478 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 21:11:18 embed-certs-121127 kubelet[1478]: I1120 21:11:18.687564    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cwvzr" podStartSLOduration=2.687545674 podStartE2EDuration="2.687545674s" podCreationTimestamp="2025-11-20 21:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:18.687191797 +0000 UTC m=+7.588220405" watchObservedRunningTime="2025-11-20 21:11:18.687545674 +0000 UTC m=+7.588574274"
	Nov 20 21:11:18 embed-certs-121127 kubelet[1478]: I1120 21:11:18.741622    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-v9ltq" podStartSLOduration=2.741602951 podStartE2EDuration="2.741602951s" podCreationTimestamp="2025-11-20 21:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:18.741276201 +0000 UTC m=+7.642304809" watchObservedRunningTime="2025-11-20 21:11:18.741602951 +0000 UTC m=+7.642631559"
	Nov 20 21:11:58 embed-certs-121127 kubelet[1478]: I1120 21:11:58.751545    1478 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 21:11:58 embed-certs-121127 kubelet[1478]: I1120 21:11:58.823707    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f05711d6-b49b-4ed2-8707-4fe6758b0174-config-volume\") pod \"coredns-66bc5c9577-n27nb\" (UID: \"f05711d6-b49b-4ed2-8707-4fe6758b0174\") " pod="kube-system/coredns-66bc5c9577-n27nb"
	Nov 20 21:11:58 embed-certs-121127 kubelet[1478]: I1120 21:11:58.823926    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cst69\" (UniqueName: \"kubernetes.io/projected/f05711d6-b49b-4ed2-8707-4fe6758b0174-kube-api-access-cst69\") pod \"coredns-66bc5c9577-n27nb\" (UID: \"f05711d6-b49b-4ed2-8707-4fe6758b0174\") " pod="kube-system/coredns-66bc5c9577-n27nb"
	Nov 20 21:11:58 embed-certs-121127 kubelet[1478]: I1120 21:11:58.926743    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjwx8\" (UniqueName: \"kubernetes.io/projected/ffdf9381-425c-4f34-8177-4b2aca7e89be-kube-api-access-fjwx8\") pod \"storage-provisioner\" (UID: \"ffdf9381-425c-4f34-8177-4b2aca7e89be\") " pod="kube-system/storage-provisioner"
	Nov 20 21:11:58 embed-certs-121127 kubelet[1478]: I1120 21:11:58.926913    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ffdf9381-425c-4f34-8177-4b2aca7e89be-tmp\") pod \"storage-provisioner\" (UID: \"ffdf9381-425c-4f34-8177-4b2aca7e89be\") " pod="kube-system/storage-provisioner"
	Nov 20 21:11:59 embed-certs-121127 kubelet[1478]: I1120 21:11:59.773616    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-n27nb" podStartSLOduration=43.773594505 podStartE2EDuration="43.773594505s" podCreationTimestamp="2025-11-20 21:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:59.755831279 +0000 UTC m=+48.656859896" watchObservedRunningTime="2025-11-20 21:11:59.773594505 +0000 UTC m=+48.674623105"
	Nov 20 21:12:01 embed-certs-121127 kubelet[1478]: I1120 21:12:01.841866    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.841845151 podStartE2EDuration="42.841845151s" podCreationTimestamp="2025-11-20 21:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:59.803859676 +0000 UTC m=+48.704888292" watchObservedRunningTime="2025-11-20 21:12:01.841845151 +0000 UTC m=+50.742873751"
	Nov 20 21:12:01 embed-certs-121127 kubelet[1478]: I1120 21:12:01.869144    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x5cp\" (UniqueName: \"kubernetes.io/projected/25b27561-eeaa-45d2-b437-daad5f809cda-kube-api-access-7x5cp\") pod \"busybox\" (UID: \"25b27561-eeaa-45d2-b437-daad5f809cda\") " pod="default/busybox"
	
	
	==> storage-provisioner [c36018563f9d2e20de9e1b378f2cb1fe887bde1deb2823e08117dfd3ab25d552] <==
	I1120 21:11:59.644802       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 21:11:59.670901       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 21:11:59.671086       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 21:11:59.673565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:59.682728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:11:59.683019       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 21:11:59.683291       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-121127_ca918db6-6fd2-4b4f-a5c3-1ae904943cc4!
	I1120 21:11:59.685862       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba3be62b-15cf-40fc-9eb6-188a47922389", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-121127_ca918db6-6fd2-4b4f-a5c3-1ae904943cc4 became leader
	W1120 21:11:59.692121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:59.695506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:11:59.787310       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-121127_ca918db6-6fd2-4b4f-a5c3-1ae904943cc4!
	W1120 21:12:01.699061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:01.704721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:03.707952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:03.712893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:05.716894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:05.721814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:07.725859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:07.731359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:09.734832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:09.739870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:11.744471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:11.751671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-121127 -n embed-certs-121127
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-121127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-121127
helpers_test.go:243: (dbg) docker inspect embed-certs-121127:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1e01af2d673aea0d9b723a2647bf1e9e391fc200c0895ee433b7e796ab7ba8ff",
	        "Created": "2025-11-20T21:10:39.53580006Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 216423,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:10:39.61711974Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/1e01af2d673aea0d9b723a2647bf1e9e391fc200c0895ee433b7e796ab7ba8ff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1e01af2d673aea0d9b723a2647bf1e9e391fc200c0895ee433b7e796ab7ba8ff/hostname",
	        "HostsPath": "/var/lib/docker/containers/1e01af2d673aea0d9b723a2647bf1e9e391fc200c0895ee433b7e796ab7ba8ff/hosts",
	        "LogPath": "/var/lib/docker/containers/1e01af2d673aea0d9b723a2647bf1e9e391fc200c0895ee433b7e796ab7ba8ff/1e01af2d673aea0d9b723a2647bf1e9e391fc200c0895ee433b7e796ab7ba8ff-json.log",
	        "Name": "/embed-certs-121127",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-121127:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-121127",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1e01af2d673aea0d9b723a2647bf1e9e391fc200c0895ee433b7e796ab7ba8ff",
	                "LowerDir": "/var/lib/docker/overlay2/b509bc5c46a5e5b602aabcf6bed43ceb3128c699036cc161ce02303780625a39-init/diff:/var/lib/docker/overlay2/5105da773b59b243b777c3c083d206b6a741bd11ebc5a0283799917fe36ebbb2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b509bc5c46a5e5b602aabcf6bed43ceb3128c699036cc161ce02303780625a39/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b509bc5c46a5e5b602aabcf6bed43ceb3128c699036cc161ce02303780625a39/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b509bc5c46a5e5b602aabcf6bed43ceb3128c699036cc161ce02303780625a39/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-121127",
	                "Source": "/var/lib/docker/volumes/embed-certs-121127/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-121127",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-121127",
	                "name.minikube.sigs.k8s.io": "embed-certs-121127",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2b883e1e0c90cfa875ea3a2b30a52f2c81b3f194aeb6a53564ddcfeafbd3aaf0",
	            "SandboxKey": "/var/run/docker/netns/2b883e1e0c90",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-121127": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:22:aa:02:a3:6e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bbac06c0462da8b50eaaaee1c67cbdbf5ee119e8c368ef8ccec363fe3a0deee0",
	                    "EndpointID": "61cadc264e16494596cca35b6c2c2024186f05d928c7865ca3263765cd3d68c7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-121127",
	                        "1e01af2d673a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-121127 -n embed-certs-121127
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-121127 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-121127 logs -n 25: (1.202117316s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-339813 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-339813   │ jenkins │ v1.37.0 │ 20 Nov 25 21:06 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ force-systemd-env-444240 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-444240 │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ delete  │ -p force-systemd-env-444240                                                                                                                                                                                                                         │ force-systemd-env-444240 │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p cert-options-530158 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-530158      │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ cert-options-530158 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-530158      │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ ssh     │ -p cert-options-530158 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-530158      │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ delete  │ -p cert-options-530158                                                                                                                                                                                                                              │ cert-options-530158      │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:07 UTC │
	│ start   │ -p old-k8s-version-023521 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:07 UTC │ 20 Nov 25 21:08 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-023521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │ 20 Nov 25 21:09 UTC │
	│ stop    │ -p old-k8s-version-023521 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │ 20 Nov 25 21:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-023521 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │ 20 Nov 25 21:09 UTC │
	│ start   │ -p old-k8s-version-023521 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:09 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ -p cert-expiration-339813 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-339813   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p cert-expiration-339813                                                                                                                                                                                                                           │ cert-expiration-339813   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ image   │ old-k8s-version-023521 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ -p no-preload-882483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-882483        │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:11 UTC │
	│ pause   │ -p old-k8s-version-023521 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ unpause │ -p old-k8s-version-023521 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p old-k8s-version-023521                                                                                                                                                                                                                           │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ delete  │ -p old-k8s-version-023521                                                                                                                                                                                                                           │ old-k8s-version-023521   │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:10 UTC │
	│ start   │ -p embed-certs-121127 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-121127       │ jenkins │ v1.37.0 │ 20 Nov 25 21:10 UTC │ 20 Nov 25 21:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-882483 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-882483        │ jenkins │ v1.37.0 │ 20 Nov 25 21:11 UTC │ 20 Nov 25 21:11 UTC │
	│ stop    │ -p no-preload-882483 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-882483        │ jenkins │ v1.37.0 │ 20 Nov 25 21:11 UTC │ 20 Nov 25 21:12 UTC │
	│ addons  │ enable dashboard -p no-preload-882483 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-882483        │ jenkins │ v1.37.0 │ 20 Nov 25 21:12 UTC │ 20 Nov 25 21:12 UTC │
	│ start   │ -p no-preload-882483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-882483        │ jenkins │ v1.37.0 │ 20 Nov 25 21:12 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:12:11
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:12:11.632610  222253 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:12:11.632799  222253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:12:11.632825  222253 out.go:374] Setting ErrFile to fd 2...
	I1120 21:12:11.632846  222253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:12:11.633592  222253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 21:12:11.634910  222253 out.go:368] Setting JSON to false
	I1120 21:12:11.636125  222253 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3281,"bootTime":1763669851,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1120 21:12:11.636213  222253 start.go:143] virtualization:  
	I1120 21:12:11.640519  222253 out.go:179] * [no-preload-882483] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:12:11.642836  222253 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:12:11.642916  222253 notify.go:221] Checking for updates...
	I1120 21:12:11.650405  222253 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:12:11.653406  222253 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 21:12:11.656392  222253 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	I1120 21:12:11.659287  222253 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:12:11.663069  222253 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:12:11.666390  222253 config.go:182] Loaded profile config "no-preload-882483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 21:12:11.667082  222253 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:12:11.702740  222253 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:12:11.702859  222253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:12:11.865231  222253 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 21:12:11.852030989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:12:11.865330  222253 docker.go:319] overlay module found
	I1120 21:12:11.868585  222253 out.go:179] * Using the docker driver based on existing profile
	I1120 21:12:11.871453  222253 start.go:309] selected driver: docker
	I1120 21:12:11.871473  222253 start.go:930] validating driver "docker" against &{Name:no-preload-882483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-882483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:12:11.871659  222253 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:12:11.872457  222253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:12:11.962649  222253 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 21:12:11.949670634 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:12:11.962980  222253 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:12:11.963007  222253 cni.go:84] Creating CNI manager for ""
	I1120 21:12:11.963057  222253 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:12:11.963087  222253 start.go:353] cluster config:
	{Name:no-preload-882483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-882483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:12:11.966345  222253 out.go:179] * Starting "no-preload-882483" primary control-plane node in "no-preload-882483" cluster
	I1120 21:12:11.969099  222253 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 21:12:11.972034  222253 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:12:11.974953  222253 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 21:12:11.975096  222253 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/config.json ...
	I1120 21:12:11.975388  222253 cache.go:107] acquiring lock: {Name:mk1789cadcdd851b64b95deb759bd9325b3efd73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:12:11.975462  222253 cache.go:115] /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1120 21:12:11.975470  222253 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 90.75µs
	I1120 21:12:11.975478  222253 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1120 21:12:11.975489  222253 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:12:11.975623  222253 cache.go:107] acquiring lock: {Name:mke6ba563119b8280f1b5a4b3d855834efbbd3a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:12:11.975674  222253 cache.go:115] /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1120 21:12:11.975682  222253 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 64.239µs
	I1120 21:12:11.975688  222253 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1120 21:12:11.975698  222253 cache.go:107] acquiring lock: {Name:mk3af52f1535fcf82cd13fd7bfd764bd4e8f900f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:12:11.975727  222253 cache.go:115] /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1120 21:12:11.975732  222253 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 34.987µs
	I1120 21:12:11.975742  222253 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1120 21:12:11.975751  222253 cache.go:107] acquiring lock: {Name:mk6b382914e40603b00fafc9275dfb33fe9b950a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:12:11.975777  222253 cache.go:115] /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1120 21:12:11.975782  222253 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 32.009µs
	I1120 21:12:11.975787  222253 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1120 21:12:11.975796  222253 cache.go:107] acquiring lock: {Name:mk4fe54fb8f1eb60295be31ad3bd6de77de8c4fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:12:11.975821  222253 cache.go:115] /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1120 21:12:11.975825  222253 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.63µs
	I1120 21:12:11.975831  222253 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1120 21:12:11.975846  222253 cache.go:107] acquiring lock: {Name:mk164f849cfcf5d98106b7563a4ad485e530e3ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:12:11.975871  222253 cache.go:115] /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1120 21:12:11.975876  222253 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 36.587µs
	I1120 21:12:11.975881  222253 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1120 21:12:11.975900  222253 cache.go:107] acquiring lock: {Name:mk229e4a6fba52cbecd106e4b196944c3dd16212 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:12:11.975925  222253 cache.go:115] /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1120 21:12:11.975931  222253 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.771µs
	I1120 21:12:11.975936  222253 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1120 21:12:11.975946  222253 cache.go:107] acquiring lock: {Name:mk7a8425beb85e9678b00ff7628973499a24e174 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:12:11.975971  222253 cache.go:115] /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1120 21:12:11.975976  222253 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 31.295µs
	I1120 21:12:11.975981  222253 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21923-2300/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1120 21:12:11.975987  222253 cache.go:87] Successfully saved all images to host disk.
	I1120 21:12:11.994718  222253 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:12:11.994738  222253 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:12:11.994757  222253 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:12:11.994794  222253 start.go:360] acquireMachinesLock for no-preload-882483: {Name:mkf328e09745e38a49d77a84aa3d82361f133809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:12:11.994856  222253 start.go:364] duration metric: took 47.583µs to acquireMachinesLock for "no-preload-882483"
	I1120 21:12:11.994952  222253 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:12:11.994961  222253 fix.go:54] fixHost starting: 
	I1120 21:12:11.995257  222253 cli_runner.go:164] Run: docker container inspect no-preload-882483 --format={{.State.Status}}
	I1120 21:12:12.019073  222253 fix.go:112] recreateIfNeeded on no-preload-882483: state=Stopped err=<nil>
	W1120 21:12:12.019105  222253 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	162e6ca6e027d       1611cd07b61d5       9 seconds ago        Running             busybox                   0                   b68911ae2a843       busybox                                      default
	c36018563f9d2       ba04bb24b9575       15 seconds ago       Running             storage-provisioner       0                   a01920b2dc30a       storage-provisioner                          kube-system
	83b99b1db5fa6       138784d87c9c5       15 seconds ago       Running             coredns                   0                   0eaad59ebfd2d       coredns-66bc5c9577-n27nb                     kube-system
	fc9aba71d2405       b1a8c6f707935       56 seconds ago       Running             kindnet-cni               0                   7287147637b36       kindnet-v9ltq                                kube-system
	64ed1b418fe60       05baa95f5142d       57 seconds ago       Running             kube-proxy                0                   fd05db422d173       kube-proxy-cwvzr                             kube-system
	e6dcdd9202998       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   c56719205c082       kube-scheduler-embed-certs-121127            kube-system
	320335813f083       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   4f175f07d0408       kube-apiserver-embed-certs-121127            kube-system
	0bbf905a025ff       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   7f8f085e427f2       kube-controller-manager-embed-certs-121127   kube-system
	b62a326059043       a1894772a478e       About a minute ago   Running             etcd                      0                   d0e238102e598       etcd-embed-certs-121127                      kube-system
	
	
	==> containerd <==
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.400745380Z" level=info msg="CreateContainer within sandbox \"0eaad59ebfd2d15d23b4e2634037ea62eaba9ef7dab4d648f29363f99d7ab2c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"83b99b1db5fa6379ed83cc78074fa129b8e6ee77f17963dcafbaba355f508764\""
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.426818196Z" level=info msg="StartContainer for \"83b99b1db5fa6379ed83cc78074fa129b8e6ee77f17963dcafbaba355f508764\""
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.432321893Z" level=info msg="connecting to shim 83b99b1db5fa6379ed83cc78074fa129b8e6ee77f17963dcafbaba355f508764" address="unix:///run/containerd/s/4eee3db2abcfd958fd7ef761f624a48e654cb872b31fa7275b11e72af9df234d" protocol=ttrpc version=3
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.437609291Z" level=info msg="Container c36018563f9d2e20de9e1b378f2cb1fe887bde1deb2823e08117dfd3ab25d552: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.474604856Z" level=info msg="CreateContainer within sandbox \"a01920b2dc30af6a1cc7f00e86c45845ac05bf2adbd8ac310d3b7c4c15b1a051\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"c36018563f9d2e20de9e1b378f2cb1fe887bde1deb2823e08117dfd3ab25d552\""
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.478248639Z" level=info msg="StartContainer for \"c36018563f9d2e20de9e1b378f2cb1fe887bde1deb2823e08117dfd3ab25d552\""
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.481159534Z" level=info msg="connecting to shim c36018563f9d2e20de9e1b378f2cb1fe887bde1deb2823e08117dfd3ab25d552" address="unix:///run/containerd/s/3a5f64159e89fcec81f4ae71307e946989484d5a2c9c121667b6024ec0881013" protocol=ttrpc version=3
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.569925553Z" level=info msg="StartContainer for \"83b99b1db5fa6379ed83cc78074fa129b8e6ee77f17963dcafbaba355f508764\" returns successfully"
	Nov 20 21:11:59 embed-certs-121127 containerd[760]: time="2025-11-20T21:11:59.659415375Z" level=info msg="StartContainer for \"c36018563f9d2e20de9e1b378f2cb1fe887bde1deb2823e08117dfd3ab25d552\" returns successfully"
	Nov 20 21:12:02 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:02.162479404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:25b27561-eeaa-45d2-b437-daad5f809cda,Namespace:default,Attempt:0,}"
	Nov 20 21:12:02 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:02.220892818Z" level=info msg="connecting to shim b68911ae2a843ec812f1ea323f7c960cac1b3bcd708f67bb2e86a6031d26ca64" address="unix:///run/containerd/s/ed0c6c4cb14834f6fe50eade4f89c981ad74698b797f836eb30530cccb8a85c7" namespace=k8s.io protocol=ttrpc version=3
	Nov 20 21:12:02 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:02.288802633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:25b27561-eeaa-45d2-b437-daad5f809cda,Namespace:default,Attempt:0,} returns sandbox id \"b68911ae2a843ec812f1ea323f7c960cac1b3bcd708f67bb2e86a6031d26ca64\""
	Nov 20 21:12:02 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:02.292257251Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.667506519Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.669410094Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937189"
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.671861658Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.675341623Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.675857209Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.383363745s"
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.675905629Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.685492575Z" level=info msg="CreateContainer within sandbox \"b68911ae2a843ec812f1ea323f7c960cac1b3bcd708f67bb2e86a6031d26ca64\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.698361042Z" level=info msg="Container 162e6ca6e027df8081f7dc141e62cc44fa27c2d9aa83098cb2b25618557a68aa: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.710192821Z" level=info msg="CreateContainer within sandbox \"b68911ae2a843ec812f1ea323f7c960cac1b3bcd708f67bb2e86a6031d26ca64\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"162e6ca6e027df8081f7dc141e62cc44fa27c2d9aa83098cb2b25618557a68aa\""
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.711050215Z" level=info msg="StartContainer for \"162e6ca6e027df8081f7dc141e62cc44fa27c2d9aa83098cb2b25618557a68aa\""
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.712226548Z" level=info msg="connecting to shim 162e6ca6e027df8081f7dc141e62cc44fa27c2d9aa83098cb2b25618557a68aa" address="unix:///run/containerd/s/ed0c6c4cb14834f6fe50eade4f89c981ad74698b797f836eb30530cccb8a85c7" protocol=ttrpc version=3
	Nov 20 21:12:04 embed-certs-121127 containerd[760]: time="2025-11-20T21:12:04.785988753Z" level=info msg="StartContainer for \"162e6ca6e027df8081f7dc141e62cc44fa27c2d9aa83098cb2b25618557a68aa\" returns successfully"
	
	
	==> coredns [83b99b1db5fa6379ed83cc78074fa129b8e6ee77f17963dcafbaba355f508764] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44891 - 24766 "HINFO IN 6891144907147596844.857419956422712390. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013117587s
	
	
	==> describe nodes <==
	Name:               embed-certs-121127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-121127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=embed-certs-121127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_11_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:11:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-121127
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:12:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:12:12 +0000   Thu, 20 Nov 2025 21:11:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:12:12 +0000   Thu, 20 Nov 2025 21:11:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:12:12 +0000   Thu, 20 Nov 2025 21:11:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:12:12 +0000   Thu, 20 Nov 2025 21:11:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-121127
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                0314476a-4c90-495e-bfa8-8db07e9365ab
	  Boot ID:                    0cc3a06a-788d-45d4-8fff-2131330a9ee0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-n27nb                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     58s
	  kube-system                 etcd-embed-certs-121127                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-v9ltq                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-embed-certs-121127             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-embed-certs-121127    200m (10%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-cwvzr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-embed-certs-121127             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 56s                kube-proxy       
	  Normal   NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 74s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  74s (x3 over 74s)  kubelet          Node embed-certs-121127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    74s (x3 over 74s)  kubelet          Node embed-certs-121127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s (x3 over 74s)  kubelet          Node embed-certs-121127 status is now: NodeHasSufficientPID
	  Normal   Starting                 74s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node embed-certs-121127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node embed-certs-121127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node embed-certs-121127 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                node-controller  Node embed-certs-121127 event: Registered Node embed-certs-121127 in Controller
	  Normal   NodeReady                16s                kubelet          Node embed-certs-121127 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov20 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014399] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.765613] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.782554] kauditd_printk_skb: 36 callbacks suppressed
	[Nov20 20:40] hrtimer: interrupt took 1888672 ns
	
	
	==> etcd [b62a3260590432c5db545b5e461f7f58f29b9300646ecdfbfcaffc0459763f55] <==
	{"level":"warn","ts":"2025-11-20T21:11:04.567554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.616506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.689947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.723658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.752665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.789974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.848267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.880061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.919101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.956566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:04.975509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.007228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.038893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.075153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.118876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.163041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.215680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.249590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.308291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.351944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.408906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.457349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.506315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.571988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:11:05.753086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44714","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:12:14 up 54 min,  0 user,  load average: 2.89, 3.17, 2.81
	Linux embed-certs-121127 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fc9aba71d240579afa3c8851bc29c0b698c134598a36022bdcbf5d8928accf5b] <==
	I1120 21:11:18.461328       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:11:18.521621       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 21:11:18.521760       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:11:18.521774       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:11:18.521788       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:11:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:11:18.727611       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:11:18.727632       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:11:18.727640       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:11:18.727970       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 21:11:48.727343       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 21:11:48.728269       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 21:11:48.728383       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1120 21:11:48.728495       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1120 21:11:50.228072       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:11:50.228137       1 metrics.go:72] Registering metrics
	I1120 21:11:50.228221       1 controller.go:711] "Syncing nftables rules"
	I1120 21:11:58.733890       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:11:58.733949       1 main.go:301] handling current node
	I1120 21:12:08.728075       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:12:08.728113       1 main.go:301] handling current node
	
	
	==> kube-apiserver [320335813f08375610807e0c11fe8fb16b551eaac5c72d4e8144d52c5dce11eb] <==
	I1120 21:11:07.616699       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 21:11:07.640802       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 21:11:07.641180       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:11:07.643863       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 21:11:07.665346       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:11:07.687126       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:11:07.690487       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:11:08.139125       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 21:11:08.161876       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 21:11:08.164326       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:11:09.662993       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:11:09.755866       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:11:09.965067       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 21:11:10.037359       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1120 21:11:10.039208       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:11:10.047589       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:11:10.481186       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:11:11.147163       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:11:11.172393       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 21:11:11.190094       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:11:16.235709       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:11:16.246198       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:11:16.287160       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1120 21:11:16.449617       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1120 21:12:11.047050       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:48048: use of closed network connection
	
	
	==> kube-controller-manager [0bbf905a025ff70fc4eb815915c61563da14a81cb1b77291f7a7aa69d28b7af2] <==
	I1120 21:11:15.672476       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 21:11:15.672603       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 21:11:15.672767       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 21:11:15.672840       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1120 21:11:15.672938       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 21:11:15.673059       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-121127"
	I1120 21:11:15.673127       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1120 21:11:15.678869       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1120 21:11:15.679069       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 21:11:15.679346       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:11:15.682562       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 21:11:15.687691       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 21:11:15.694825       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 21:11:15.695129       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:11:15.696163       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:11:15.701962       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 21:11:15.719448       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:11:15.719655       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:11:15.719749       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:11:15.721969       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 21:11:15.721988       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 21:11:15.725180       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 21:11:15.732456       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 21:11:15.738354       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:12:00.681549       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [64ed1b418fe6038cab3ada21f3e6088a0f3e61a04392ee0286c9253024736e16] <==
	I1120 21:11:18.245310       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:11:18.403263       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:11:18.504707       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:11:18.504744       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 21:11:18.504827       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:11:18.628659       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:11:18.628716       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:11:18.679477       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:11:18.680132       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:11:18.680150       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:11:18.698100       1 config.go:200] "Starting service config controller"
	I1120 21:11:18.698122       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:11:18.698137       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:11:18.698140       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:11:18.698179       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:11:18.698184       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:11:18.707262       1 config.go:309] "Starting node config controller"
	I1120 21:11:18.707285       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:11:18.707294       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:11:18.799156       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:11:18.801752       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:11:18.801803       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e6dcdd9202998fd984e097f1d1684a69449a296c7bf259edd8a7797e1db7722f] <==
	I1120 21:11:07.512014       1 serving.go:386] Generated self-signed cert in-memory
	I1120 21:11:10.082797       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 21:11:10.082840       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:11:10.105448       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 21:11:10.106188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 21:11:10.115950       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 21:11:10.116143       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 21:11:10.120404       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:11:10.120531       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:11:10.120598       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:11:10.120642       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:11:10.216883       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1120 21:11:10.228095       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:11:10.228032       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:11:12 embed-certs-121127 kubelet[1478]: I1120 21:11:12.755228    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-121127" podStartSLOduration=1.755182679 podStartE2EDuration="1.755182679s" podCreationTimestamp="2025-11-20 21:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:12.751108276 +0000 UTC m=+1.652136892" watchObservedRunningTime="2025-11-20 21:11:12.755182679 +0000 UTC m=+1.656211287"
	Nov 20 21:11:15 embed-certs-121127 kubelet[1478]: I1120 21:11:15.671521    1478 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 21:11:15 embed-certs-121127 kubelet[1478]: I1120 21:11:15.672167    1478 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.502615    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc14c174-cc63-4212-b68f-f3d6beabefd2-lib-modules\") pod \"kube-proxy-cwvzr\" (UID: \"bc14c174-cc63-4212-b68f-f3d6beabefd2\") " pod="kube-system/kube-proxy-cwvzr"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.502660    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzc7m\" (UniqueName: \"kubernetes.io/projected/bc14c174-cc63-4212-b68f-f3d6beabefd2-kube-api-access-wzc7m\") pod \"kube-proxy-cwvzr\" (UID: \"bc14c174-cc63-4212-b68f-f3d6beabefd2\") " pod="kube-system/kube-proxy-cwvzr"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.502683    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8bfde00-17f2-4c00-99e1-c1869ad89980-lib-modules\") pod \"kindnet-v9ltq\" (UID: \"d8bfde00-17f2-4c00-99e1-c1869ad89980\") " pod="kube-system/kindnet-v9ltq"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.502703    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bc14c174-cc63-4212-b68f-f3d6beabefd2-kube-proxy\") pod \"kube-proxy-cwvzr\" (UID: \"bc14c174-cc63-4212-b68f-f3d6beabefd2\") " pod="kube-system/kube-proxy-cwvzr"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.502722    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc14c174-cc63-4212-b68f-f3d6beabefd2-xtables-lock\") pod \"kube-proxy-cwvzr\" (UID: \"bc14c174-cc63-4212-b68f-f3d6beabefd2\") " pod="kube-system/kube-proxy-cwvzr"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.502740    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8bfde00-17f2-4c00-99e1-c1869ad89980-xtables-lock\") pod \"kindnet-v9ltq\" (UID: \"d8bfde00-17f2-4c00-99e1-c1869ad89980\") " pod="kube-system/kindnet-v9ltq"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.502758    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2l5x\" (UniqueName: \"kubernetes.io/projected/d8bfde00-17f2-4c00-99e1-c1869ad89980-kube-api-access-v2l5x\") pod \"kindnet-v9ltq\" (UID: \"d8bfde00-17f2-4c00-99e1-c1869ad89980\") " pod="kube-system/kindnet-v9ltq"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.502774    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d8bfde00-17f2-4c00-99e1-c1869ad89980-cni-cfg\") pod \"kindnet-v9ltq\" (UID: \"d8bfde00-17f2-4c00-99e1-c1869ad89980\") " pod="kube-system/kindnet-v9ltq"
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: E1120 21:11:16.667232    1478 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: E1120 21:11:16.667276    1478 projected.go:196] Error preparing data for projected volume kube-api-access-v2l5x for pod kube-system/kindnet-v9ltq: configmap "kube-root-ca.crt" not found
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: E1120 21:11:16.667363    1478 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8bfde00-17f2-4c00-99e1-c1869ad89980-kube-api-access-v2l5x podName:d8bfde00-17f2-4c00-99e1-c1869ad89980 nodeName:}" failed. No retries permitted until 2025-11-20 21:11:17.167337478 +0000 UTC m=+6.068366078 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v2l5x" (UniqueName: "kubernetes.io/projected/d8bfde00-17f2-4c00-99e1-c1869ad89980-kube-api-access-v2l5x") pod "kindnet-v9ltq" (UID: "d8bfde00-17f2-4c00-99e1-c1869ad89980") : configmap "kube-root-ca.crt" not found
	Nov 20 21:11:16 embed-certs-121127 kubelet[1478]: I1120 21:11:16.689178    1478 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 21:11:18 embed-certs-121127 kubelet[1478]: I1120 21:11:18.687564    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cwvzr" podStartSLOduration=2.687545674 podStartE2EDuration="2.687545674s" podCreationTimestamp="2025-11-20 21:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:18.687191797 +0000 UTC m=+7.588220405" watchObservedRunningTime="2025-11-20 21:11:18.687545674 +0000 UTC m=+7.588574274"
	Nov 20 21:11:18 embed-certs-121127 kubelet[1478]: I1120 21:11:18.741622    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-v9ltq" podStartSLOduration=2.741602951 podStartE2EDuration="2.741602951s" podCreationTimestamp="2025-11-20 21:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:18.741276201 +0000 UTC m=+7.642304809" watchObservedRunningTime="2025-11-20 21:11:18.741602951 +0000 UTC m=+7.642631559"
	Nov 20 21:11:58 embed-certs-121127 kubelet[1478]: I1120 21:11:58.751545    1478 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 21:11:58 embed-certs-121127 kubelet[1478]: I1120 21:11:58.823707    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f05711d6-b49b-4ed2-8707-4fe6758b0174-config-volume\") pod \"coredns-66bc5c9577-n27nb\" (UID: \"f05711d6-b49b-4ed2-8707-4fe6758b0174\") " pod="kube-system/coredns-66bc5c9577-n27nb"
	Nov 20 21:11:58 embed-certs-121127 kubelet[1478]: I1120 21:11:58.823926    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cst69\" (UniqueName: \"kubernetes.io/projected/f05711d6-b49b-4ed2-8707-4fe6758b0174-kube-api-access-cst69\") pod \"coredns-66bc5c9577-n27nb\" (UID: \"f05711d6-b49b-4ed2-8707-4fe6758b0174\") " pod="kube-system/coredns-66bc5c9577-n27nb"
	Nov 20 21:11:58 embed-certs-121127 kubelet[1478]: I1120 21:11:58.926743    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjwx8\" (UniqueName: \"kubernetes.io/projected/ffdf9381-425c-4f34-8177-4b2aca7e89be-kube-api-access-fjwx8\") pod \"storage-provisioner\" (UID: \"ffdf9381-425c-4f34-8177-4b2aca7e89be\") " pod="kube-system/storage-provisioner"
	Nov 20 21:11:58 embed-certs-121127 kubelet[1478]: I1120 21:11:58.926913    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ffdf9381-425c-4f34-8177-4b2aca7e89be-tmp\") pod \"storage-provisioner\" (UID: \"ffdf9381-425c-4f34-8177-4b2aca7e89be\") " pod="kube-system/storage-provisioner"
	Nov 20 21:11:59 embed-certs-121127 kubelet[1478]: I1120 21:11:59.773616    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-n27nb" podStartSLOduration=43.773594505 podStartE2EDuration="43.773594505s" podCreationTimestamp="2025-11-20 21:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:59.755831279 +0000 UTC m=+48.656859896" watchObservedRunningTime="2025-11-20 21:11:59.773594505 +0000 UTC m=+48.674623105"
	Nov 20 21:12:01 embed-certs-121127 kubelet[1478]: I1120 21:12:01.841866    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.841845151 podStartE2EDuration="42.841845151s" podCreationTimestamp="2025-11-20 21:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:11:59.803859676 +0000 UTC m=+48.704888292" watchObservedRunningTime="2025-11-20 21:12:01.841845151 +0000 UTC m=+50.742873751"
	Nov 20 21:12:01 embed-certs-121127 kubelet[1478]: I1120 21:12:01.869144    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x5cp\" (UniqueName: \"kubernetes.io/projected/25b27561-eeaa-45d2-b437-daad5f809cda-kube-api-access-7x5cp\") pod \"busybox\" (UID: \"25b27561-eeaa-45d2-b437-daad5f809cda\") " pod="default/busybox"
	
	
	==> storage-provisioner [c36018563f9d2e20de9e1b378f2cb1fe887bde1deb2823e08117dfd3ab25d552] <==
	I1120 21:11:59.644802       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 21:11:59.670901       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 21:11:59.671086       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 21:11:59.673565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:59.682728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:11:59.683019       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 21:11:59.683291       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-121127_ca918db6-6fd2-4b4f-a5c3-1ae904943cc4!
	I1120 21:11:59.685862       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba3be62b-15cf-40fc-9eb6-188a47922389", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-121127_ca918db6-6fd2-4b4f-a5c3-1ae904943cc4 became leader
	W1120 21:11:59.692121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:11:59.695506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:11:59.787310       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-121127_ca918db6-6fd2-4b4f-a5c3-1ae904943cc4!
	W1120 21:12:01.699061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:01.704721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:03.707952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:03.712893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:05.716894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:05.721814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:07.725859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:07.731359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:09.734832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:09.739870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:11.744471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:11.751671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:13.755764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:12:13.760874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-121127 -n embed-certs-121127
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-121127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (14.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (16.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-588348 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [53e62629-f375-4ed6-9baf-052a28f0f0fc] Pending
helpers_test.go:352: "busybox" [53e62629-f375-4ed6-9baf-052a28f0f0fc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [53e62629-f375-4ed6-9baf-052a28f0f0fc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003166513s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-588348 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
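Note: the assertion that fails above is the test's file-descriptor-limit check: "ulimit -n" inside the busybox pod is expected to report 1048576, but the common shell default of 1024 came back instead. A minimal manual reproduction against the same profile, using only the context, manifest, and command already shown in this log (a sketch, not part of the test harness), would be:

	kubectl --context default-k8s-diff-port-588348 create -f testdata/busybox.yaml
	kubectl --context default-k8s-diff-port-588348 wait --for=condition=Ready pod/busybox --timeout=8m
	kubectl --context default-k8s-diff-port-588348 exec busybox -- /bin/sh -c "ulimit -n"   # the test expects 1048576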
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-588348
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-588348:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "80a56514fc4d1c64f088f0462702334929bed729f22968f906e9688de6cbc854",
	        "Created": "2025-11-20T21:13:25.451039613Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 230410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:13:25.522234223Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/80a56514fc4d1c64f088f0462702334929bed729f22968f906e9688de6cbc854/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/80a56514fc4d1c64f088f0462702334929bed729f22968f906e9688de6cbc854/hostname",
	        "HostsPath": "/var/lib/docker/containers/80a56514fc4d1c64f088f0462702334929bed729f22968f906e9688de6cbc854/hosts",
	        "LogPath": "/var/lib/docker/containers/80a56514fc4d1c64f088f0462702334929bed729f22968f906e9688de6cbc854/80a56514fc4d1c64f088f0462702334929bed729f22968f906e9688de6cbc854-json.log",
	        "Name": "/default-k8s-diff-port-588348",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-588348:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-588348",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "80a56514fc4d1c64f088f0462702334929bed729f22968f906e9688de6cbc854",
	                "LowerDir": "/var/lib/docker/overlay2/2714d5748cecfc66f983f3029497fa1155487093ea18341d0874b46d82487abb-init/diff:/var/lib/docker/overlay2/5105da773b59b243b777c3c083d206b6a741bd11ebc5a0283799917fe36ebbb2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2714d5748cecfc66f983f3029497fa1155487093ea18341d0874b46d82487abb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2714d5748cecfc66f983f3029497fa1155487093ea18341d0874b46d82487abb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2714d5748cecfc66f983f3029497fa1155487093ea18341d0874b46d82487abb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-588348",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-588348/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-588348",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-588348",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-588348",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a4c8702905e517624184358309ab458ec5478ff3022cd210ba2571d4146b098",
	            "SandboxKey": "/var/run/docker/netns/1a4c8702905e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-588348": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:42:e2:da:01:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cd51cbccc34af9946874f089ffbed15390ea1be54b0475ffe986da80be71a59e",
	                    "EndpointID": "e00178d954410c7645268591bb5d9b63873939d4661fb6fad92a2cb11721946f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-588348",
	                        "80a56514fc4d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
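The published host ports recorded above (33083-33087) can be read back out of the same inspect data with a format template instead of scanning the full JSON; a small sketch using only fields present in the output above:

	docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-588348
	docker port default-k8s-diff-port-588348 8444/tcp   # prints 127.0.0.1:33086, the mapped apiserver port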
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-588348 -n default-k8s-diff-port-588348
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-588348 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-588348 logs -n 25: (1.770256079s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable dashboard -p embed-certs-121127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-121127           │ jenkins │ v1.37.0 │ 20 Nov 25 21:12 UTC │ 20 Nov 25 21:12 UTC │
	│ start   │ -p embed-certs-121127 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-121127           │ jenkins │ v1.37.0 │ 20 Nov 25 21:12 UTC │ 20 Nov 25 21:13 UTC │
	│ image   │ no-preload-882483 image list --format=json                                                                                                                                                                                                          │ no-preload-882483            │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ pause   │ -p no-preload-882483 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-882483            │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ unpause │ -p no-preload-882483 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-882483            │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ delete  │ -p no-preload-882483                                                                                                                                                                                                                                │ no-preload-882483            │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ delete  │ -p no-preload-882483                                                                                                                                                                                                                                │ no-preload-882483            │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ delete  │ -p disable-driver-mounts-839927                                                                                                                                                                                                                     │ disable-driver-mounts-839927 │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ start   │ -p default-k8s-diff-port-588348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-588348 │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:14 UTC │
	│ image   │ embed-certs-121127 image list --format=json                                                                                                                                                                                                         │ embed-certs-121127           │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ pause   │ -p embed-certs-121127 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-121127           │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ unpause │ -p embed-certs-121127 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-121127           │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ delete  │ -p embed-certs-121127                                                                                                                                                                                                                               │ embed-certs-121127           │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ delete  │ -p embed-certs-121127                                                                                                                                                                                                                               │ embed-certs-121127           │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ start   │ -p newest-cni-701288 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-701288 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ stop    │ -p newest-cni-701288 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ addons  │ enable dashboard -p newest-cni-701288 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ start   │ -p newest-cni-701288 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ image   │ newest-cni-701288 image list --format=json                                                                                                                                                                                                          │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ pause   │ -p newest-cni-701288 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ unpause │ -p newest-cni-701288 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ delete  │ -p newest-cni-701288                                                                                                                                                                                                                                │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ delete  │ -p newest-cni-701288                                                                                                                                                                                                                                │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ start   │ -p auto-448616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-448616                  │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:14:50
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:14:50.605146  240268 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:14:50.605380  240268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:50.605409  240268 out.go:374] Setting ErrFile to fd 2...
	I1120 21:14:50.605430  240268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:50.605780  240268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 21:14:50.606301  240268 out.go:368] Setting JSON to false
	I1120 21:14:50.607399  240268 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3440,"bootTime":1763669851,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1120 21:14:50.607514  240268 start.go:143] virtualization:  
	I1120 21:14:50.611466  240268 out.go:179] * [auto-448616] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:14:50.615728  240268 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:14:50.615815  240268 notify.go:221] Checking for updates...
	I1120 21:14:50.621972  240268 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:14:50.625028  240268 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 21:14:50.628058  240268 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	I1120 21:14:50.631232  240268 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:14:50.634220  240268 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:14:50.638063  240268 config.go:182] Loaded profile config "default-k8s-diff-port-588348": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 21:14:50.638187  240268 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:14:50.670309  240268 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:14:50.670463  240268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:14:50.730056  240268 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 21:14:50.720119267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:14:50.730177  240268 docker.go:319] overlay module found
	I1120 21:14:50.735288  240268 out.go:179] * Using the docker driver based on user configuration
	I1120 21:14:50.738286  240268 start.go:309] selected driver: docker
	I1120 21:14:50.738308  240268 start.go:930] validating driver "docker" against <nil>
	I1120 21:14:50.738322  240268 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:14:50.739256  240268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:14:50.801775  240268 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 21:14:50.792636772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:14:50.801925  240268 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 21:14:50.802157  240268 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:14:50.805237  240268 out.go:179] * Using Docker driver with root privileges
	I1120 21:14:50.808098  240268 cni.go:84] Creating CNI manager for ""
	I1120 21:14:50.808174  240268 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:14:50.808189  240268 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:14:50.808271  240268 start.go:353] cluster config:
	{Name:auto-448616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-448616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:con
tainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:14:50.811414  240268 out.go:179] * Starting "auto-448616" primary control-plane node in "auto-448616" cluster
	I1120 21:14:50.814403  240268 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 21:14:50.817659  240268 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:14:50.820503  240268 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 21:14:50.820550  240268 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:14:50.820553  240268 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1120 21:14:50.820650  240268 cache.go:65] Caching tarball of preloaded images
	I1120 21:14:50.820758  240268 preload.go:238] Found /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1120 21:14:50.820769  240268 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1120 21:14:50.820949  240268 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/config.json ...
	I1120 21:14:50.820976  240268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/config.json: {Name:mke8706baf99c998eac2958111693dc037c4c641 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:14:50.844842  240268 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:14:50.844867  240268 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:14:50.844880  240268 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:14:50.844903  240268 start.go:360] acquireMachinesLock for auto-448616: {Name:mk6b2a527a644286d225183467eb14d74f21ac12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:14:50.845018  240268 start.go:364] duration metric: took 94.09µs to acquireMachinesLock for "auto-448616"
	I1120 21:14:50.845050  240268 start.go:93] Provisioning new machine with config: &{Name:auto-448616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-448616 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 21:14:50.845124  240268 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	26566cb56867b       1611cd07b61d5       8 seconds ago        Running             busybox                   0                   12c3c8da49dc9       busybox                                                default
	cbea74d47f159       138784d87c9c5       14 seconds ago       Running             coredns                   0                   ef0b97e07cb52       coredns-66bc5c9577-7976f                               kube-system
	388690ebc3ca5       ba04bb24b9575       14 seconds ago       Running             storage-provisioner       0                   5983e840d52f1       storage-provisioner                                    kube-system
	767f3bc76acb6       b1a8c6f707935       56 seconds ago       Running             kindnet-cni               0                   2e4177f344198       kindnet-jjjzp                                          kube-system
	daaad96aa53bf       05baa95f5142d       56 seconds ago       Running             kube-proxy                0                   67033f71998cf       kube-proxy-px884                                       kube-system
	b9a7cebee83d9       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   1e5d14bc2c632       kube-apiserver-default-k8s-diff-port-588348            kube-system
	761b4e9be37fd       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   e14f999f42e58       kube-controller-manager-default-k8s-diff-port-588348   kube-system
	2c9037165a1bd       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   b6ecc25525548       kube-scheduler-default-k8s-diff-port-588348            kube-system
	7e7c2f9f39250       a1894772a478e       About a minute ago   Running             etcd                      0                   c4e6f92eb5f67       etcd-default-k8s-diff-port-588348                      kube-system
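
A listing roughly equivalent to the CRI container table above, assuming crictl is available inside the node (it normally is with the containerd runtime), can be pulled directly from the profile with:

    minikube ssh -p default-k8s-diff-port-588348 -- sudo crictl ps -a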
	
	
	==> containerd <==
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.335865311Z" level=info msg="connecting to shim 388690ebc3ca5f10870951db14bb6367b336590643da58189a0a2e1229b2140c" address="unix:///run/containerd/s/d69bf8d25f4b3d2a2a12e28a9f5e64d1d83a96dc31a6d9c05cbe4a19cdd1a63b" protocol=ttrpc version=3
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.411307184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7976f,Uid:d31e708d-c6bd-4313-81c8-4094f93ca502,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef0b97e07cb5221195a4cab8962cde00e7828bab3d497c48a5a77bb819f3d2d1\""
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.421310210Z" level=info msg="CreateContainer within sandbox \"ef0b97e07cb5221195a4cab8962cde00e7828bab3d497c48a5a77bb819f3d2d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.449470992Z" level=info msg="Container cbea74d47f159d1a24b96ceb695839a4e8c70c67f0fb89fe6c5a8b4fbdd3681d: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.468916419Z" level=info msg="CreateContainer within sandbox \"ef0b97e07cb5221195a4cab8962cde00e7828bab3d497c48a5a77bb819f3d2d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cbea74d47f159d1a24b96ceb695839a4e8c70c67f0fb89fe6c5a8b4fbdd3681d\""
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.483453684Z" level=info msg="StartContainer for \"cbea74d47f159d1a24b96ceb695839a4e8c70c67f0fb89fe6c5a8b4fbdd3681d\""
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.493719640Z" level=info msg="connecting to shim cbea74d47f159d1a24b96ceb695839a4e8c70c67f0fb89fe6c5a8b4fbdd3681d" address="unix:///run/containerd/s/ecedb5742610c03c59a36fb40ff12e65e539ae68d3e2a2911d6e50d30c1ae997" protocol=ttrpc version=3
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.520413244Z" level=info msg="StartContainer for \"388690ebc3ca5f10870951db14bb6367b336590643da58189a0a2e1229b2140c\" returns successfully"
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.633593366Z" level=info msg="StartContainer for \"cbea74d47f159d1a24b96ceb695839a4e8c70c67f0fb89fe6c5a8b4fbdd3681d\" returns successfully"
	Nov 20 21:14:45 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:45.083415580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:53e62629-f375-4ed6-9baf-052a28f0f0fc,Namespace:default,Attempt:0,}"
	Nov 20 21:14:45 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:45.241929214Z" level=info msg="connecting to shim 12c3c8da49dc9d6b31a0512c4d67e35fe6ae31a9796ece30ede5866bf8a97522" address="unix:///run/containerd/s/47fbb307823cc761aa21f6ca15fca02c598e231827abc18963f800fff6190b63" namespace=k8s.io protocol=ttrpc version=3
	Nov 20 21:14:45 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:45.377695321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:53e62629-f375-4ed6-9baf-052a28f0f0fc,Namespace:default,Attempt:0,} returns sandbox id \"12c3c8da49dc9d6b31a0512c4d67e35fe6ae31a9796ece30ede5866bf8a97522\""
	Nov 20 21:14:45 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:45.381830323Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.516297013Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.518554409Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937191"
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.520980859Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.524863352Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.525671515Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.143794365s"
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.525809651Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.535706887Z" level=info msg="CreateContainer within sandbox \"12c3c8da49dc9d6b31a0512c4d67e35fe6ae31a9796ece30ede5866bf8a97522\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.575250439Z" level=info msg="Container 26566cb56867bc0ea72aad33dec15319c7a87a702f2a115781ce8af8cc8be488: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.593516062Z" level=info msg="CreateContainer within sandbox \"12c3c8da49dc9d6b31a0512c4d67e35fe6ae31a9796ece30ede5866bf8a97522\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"26566cb56867bc0ea72aad33dec15319c7a87a702f2a115781ce8af8cc8be488\""
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.595511159Z" level=info msg="StartContainer for \"26566cb56867bc0ea72aad33dec15319c7a87a702f2a115781ce8af8cc8be488\""
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.596511252Z" level=info msg="connecting to shim 26566cb56867bc0ea72aad33dec15319c7a87a702f2a115781ce8af8cc8be488" address="unix:///run/containerd/s/47fbb307823cc761aa21f6ca15fca02c598e231827abc18963f800fff6190b63" protocol=ttrpc version=3
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.712853028Z" level=info msg="StartContainer for \"26566cb56867bc0ea72aad33dec15319c7a87a702f2a115781ce8af8cc8be488\" returns successfully"
	
	
	==> coredns [cbea74d47f159d1a24b96ceb695839a4e8c70c67f0fb89fe6c5a8b4fbdd3681d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48000 - 29909 "HINFO IN 8817772603551219641.8986221277654578198. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039350194s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-588348
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-588348
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=default-k8s-diff-port-588348
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_13_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:13:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-588348
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:14:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:14:54 +0000   Thu, 20 Nov 2025 21:13:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:14:54 +0000   Thu, 20 Nov 2025 21:13:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:14:54 +0000   Thu, 20 Nov 2025 21:13:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:14:54 +0000   Thu, 20 Nov 2025 21:14:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-588348
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                05c4fdec-7aa8-4d94-adfa-f2fb740f6a80
	  Boot ID:                    0cc3a06a-788d-45d4-8fff-2131330a9ee0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-7976f                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     58s
	  kube-system                 etcd-default-k8s-diff-port-588348                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-jjjzp                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-default-k8s-diff-port-588348             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-588348    200m (10%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-px884                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-default-k8s-diff-port-588348             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node default-k8s-diff-port-588348 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node default-k8s-diff-port-588348 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x7 over 73s)  kubelet          Node default-k8s-diff-port-588348 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 64s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node default-k8s-diff-port-588348 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node default-k8s-diff-port-588348 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node default-k8s-diff-port-588348 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                node-controller  Node default-k8s-diff-port-588348 event: Registered Node default-k8s-diff-port-588348 in Controller
	  Normal   NodeReady                16s                kubelet          Node default-k8s-diff-port-588348 status is now: NodeReady
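
The node description above can be regenerated against the same profile, using the context name that the post-mortem kubectl invocations later in this report use, with:

    kubectl --context default-k8s-diff-port-588348 describe node default-k8s-diff-port-588348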
	
	
	==> dmesg <==
	[Nov20 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014399] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.765613] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.782554] kauditd_printk_skb: 36 callbacks suppressed
	[Nov20 20:40] hrtimer: interrupt took 1888672 ns
	
	
	==> etcd [7e7c2f9f3925083cc407239a2a96f89b556cdfe9ab7f984754cc0d6c7ac818ec] <==
	{"level":"warn","ts":"2025-11-20T21:13:46.654756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.684205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.718948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.771319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.780549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.810909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.832495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.863713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.882526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.896978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.913199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.938387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.952838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.971089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.992518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.009328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.037780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.042337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.061917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.086253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.100795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.122014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.182565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.258755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49144","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T21:14:00.218359Z","caller":"traceutil/trace.go:172","msg":"trace[1039107522] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"102.280859ms","start":"2025-11-20T21:14:00.116050Z","end":"2025-11-20T21:14:00.218331Z","steps":["trace[1039107522] 'process raft request'  (duration: 14.154848ms)","trace[1039107522] 'store kv pair into bolt db' {req_type:put; key:/registry/events/kube-system/kindnet-jjjzp.1879d377afa4124d; req_size:734; } (duration: 87.233966ms)"],"step_count":2}
	
	
	==> kernel <==
	 21:14:56 up 57 min,  0 user,  load average: 4.52, 4.11, 3.25
	Linux default-k8s-diff-port-588348 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [767f3bc76acb6affee6a0cdcd5a54c38538eca67ec8365a2d9c2670788965a37] <==
	I1120 21:14:00.328756       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:14:00.338968       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 21:14:00.339150       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:14:00.339178       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:14:00.339200       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:14:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:14:00.638517       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:14:00.654589       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:14:00.654637       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:14:00.654820       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 21:14:30.639247       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 21:14:30.652063       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1120 21:14:30.652178       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 21:14:30.652260       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1120 21:14:31.654977       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:14:31.655205       1 metrics.go:72] Registering metrics
	I1120 21:14:31.655392       1 controller.go:711] "Syncing nftables rules"
	I1120 21:14:40.642490       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:14:40.642548       1 main.go:301] handling current node
	I1120 21:14:50.637687       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:14:50.637911       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b9a7cebee83d9b6ff3498d725e3879fb497df6d050bebbc4589c793805606f2d] <==
	I1120 21:13:49.827059       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:13:49.829404       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 21:13:49.859755       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 21:13:49.861270       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:13:49.864690       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 21:13:49.867674       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:13:49.941056       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:13:50.085300       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 21:13:50.122570       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 21:13:50.122598       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:13:51.595970       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:13:51.672072       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:13:51.798011       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 21:13:51.810479       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1120 21:13:51.811667       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:13:51.817040       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:13:52.590770       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:13:52.788313       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:13:52.819203       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 21:13:52.842286       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:13:58.423120       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:13:58.546779       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1120 21:13:58.619668       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:13:58.683775       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1120 21:14:54.894122       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:46634: use of closed network connection
	
	
	==> kube-controller-manager [761b4e9be37fdf40acb216cebed3ff0a7936e6b61ca066eee7515b800ed94627] <==
	I1120 21:13:57.692541       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 21:13:57.694237       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 21:13:57.694259       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 21:13:57.694273       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 21:13:57.703725       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 21:13:57.704037       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:13:57.704325       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:13:57.704465       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:13:57.709353       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 21:13:57.716011       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:13:57.725791       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 21:13:57.728444       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:13:57.729771       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1120 21:13:57.730033       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 21:13:57.732836       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 21:13:57.733007       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 21:13:57.733155       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 21:13:57.733280       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 21:13:57.733416       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:13:57.735669       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1120 21:13:57.736905       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 21:13:57.747083       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 21:13:57.760408       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-588348" podCIDRs=["10.244.0.0/24"]
	I1120 21:13:57.774712       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:14:42.689167       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [daaad96aa53bfcd2f844e5723b9af2c6fe4e178b67ede9e409781d2792172075] <==
	I1120 21:13:59.967642       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:14:00.056621       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:14:00.157796       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:14:00.157849       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 21:14:00.157930       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:14:00.399891       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:14:00.399954       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:14:00.449149       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:14:00.452567       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:14:00.452606       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:14:00.464326       1 config.go:200] "Starting service config controller"
	I1120 21:14:00.464352       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:14:00.464376       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:14:00.464381       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:14:00.464393       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:14:00.464396       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:14:00.477310       1 config.go:309] "Starting node config controller"
	I1120 21:14:00.477335       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:14:00.477344       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:14:00.566507       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:14:00.566555       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:14:00.566602       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2c9037165a1bd34334e775a68eaa1d0d738c396f7bb845bb5ac3f4b0271ea2bd] <==
	I1120 21:13:47.426704       1 serving.go:386] Generated self-signed cert in-memory
	I1120 21:13:51.762552       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 21:13:51.762784       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:13:51.768626       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 21:13:51.768861       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 21:13:51.769040       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:13:51.769181       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:13:51.769163       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 21:13:51.769451       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:13:51.769475       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:13:51.769184       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 21:13:51.869561       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:13:51.869575       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:13:51.869600       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 20 21:13:53 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:53.958865    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-588348" podStartSLOduration=0.958847089 podStartE2EDuration="958.847089ms" podCreationTimestamp="2025-11-20 21:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:13:53.956313208 +0000 UTC m=+1.320610732" watchObservedRunningTime="2025-11-20 21:13:53.958847089 +0000 UTC m=+1.323144605"
	Nov 20 21:13:53 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:53.988191    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-588348" podStartSLOduration=0.98817208 podStartE2EDuration="988.17208ms" podCreationTimestamp="2025-11-20 21:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:13:53.971378324 +0000 UTC m=+1.335675848" watchObservedRunningTime="2025-11-20 21:13:53.98817208 +0000 UTC m=+1.352469604"
	Nov 20 21:13:54 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:54.005734    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-588348" podStartSLOduration=1.00571506 podStartE2EDuration="1.00571506s" podCreationTimestamp="2025-11-20 21:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:13:53.988821996 +0000 UTC m=+1.353119520" watchObservedRunningTime="2025-11-20 21:13:54.00571506 +0000 UTC m=+1.370012576"
	Nov 20 21:13:54 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:54.033148    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-588348" podStartSLOduration=1.033075584 podStartE2EDuration="1.033075584s" podCreationTimestamp="2025-11-20 21:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:13:54.006327264 +0000 UTC m=+1.370624797" watchObservedRunningTime="2025-11-20 21:13:54.033075584 +0000 UTC m=+1.397373117"
	Nov 20 21:13:57 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:57.779640    1497 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 21:13:57 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:57.781291    1497 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.742703    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/78bc23e7-7a17-4da9-8a05-c0489d7e231e-cni-cfg\") pod \"kindnet-jjjzp\" (UID: \"78bc23e7-7a17-4da9-8a05-c0489d7e231e\") " pod="kube-system/kindnet-jjjzp"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.742759    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7fv2\" (UniqueName: \"kubernetes.io/projected/78bc23e7-7a17-4da9-8a05-c0489d7e231e-kube-api-access-w7fv2\") pod \"kindnet-jjjzp\" (UID: \"78bc23e7-7a17-4da9-8a05-c0489d7e231e\") " pod="kube-system/kindnet-jjjzp"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.742784    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78bc23e7-7a17-4da9-8a05-c0489d7e231e-xtables-lock\") pod \"kindnet-jjjzp\" (UID: \"78bc23e7-7a17-4da9-8a05-c0489d7e231e\") " pod="kube-system/kindnet-jjjzp"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.742803    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78bc23e7-7a17-4da9-8a05-c0489d7e231e-lib-modules\") pod \"kindnet-jjjzp\" (UID: \"78bc23e7-7a17-4da9-8a05-c0489d7e231e\") " pod="kube-system/kindnet-jjjzp"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.843325    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4477867-32d2-49c8-ad38-67a97a6a0138-xtables-lock\") pod \"kube-proxy-px884\" (UID: \"d4477867-32d2-49c8-ad38-67a97a6a0138\") " pod="kube-system/kube-proxy-px884"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.843383    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4477867-32d2-49c8-ad38-67a97a6a0138-lib-modules\") pod \"kube-proxy-px884\" (UID: \"d4477867-32d2-49c8-ad38-67a97a6a0138\") " pod="kube-system/kube-proxy-px884"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.843417    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz5nt\" (UniqueName: \"kubernetes.io/projected/d4477867-32d2-49c8-ad38-67a97a6a0138-kube-api-access-nz5nt\") pod \"kube-proxy-px884\" (UID: \"d4477867-32d2-49c8-ad38-67a97a6a0138\") " pod="kube-system/kube-proxy-px884"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.843469    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4477867-32d2-49c8-ad38-67a97a6a0138-kube-proxy\") pod \"kube-proxy-px884\" (UID: \"d4477867-32d2-49c8-ad38-67a97a6a0138\") " pod="kube-system/kube-proxy-px884"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.920184    1497 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 21:14:00 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:00.230879    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-px884" podStartSLOduration=2.230857368 podStartE2EDuration="2.230857368s" podCreationTimestamp="2025-11-20 21:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:14:00.230352472 +0000 UTC m=+7.594650447" watchObservedRunningTime="2025-11-20 21:14:00.230857368 +0000 UTC m=+7.595154883"
	Nov 20 21:14:00 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:00.374458    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-jjjzp" podStartSLOduration=2.37441092 podStartE2EDuration="2.37441092s" podCreationTimestamp="2025-11-20 21:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:14:00.371674532 +0000 UTC m=+7.735972056" watchObservedRunningTime="2025-11-20 21:14:00.37441092 +0000 UTC m=+7.738708444"
	Nov 20 21:14:40 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:40.656760    1497 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 21:14:40 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:40.787434    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fbb9a153-aade-496a-bfdc-2073f2f51065-tmp\") pod \"storage-provisioner\" (UID: \"fbb9a153-aade-496a-bfdc-2073f2f51065\") " pod="kube-system/storage-provisioner"
	Nov 20 21:14:40 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:40.787637    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhck7\" (UniqueName: \"kubernetes.io/projected/fbb9a153-aade-496a-bfdc-2073f2f51065-kube-api-access-dhck7\") pod \"storage-provisioner\" (UID: \"fbb9a153-aade-496a-bfdc-2073f2f51065\") " pod="kube-system/storage-provisioner"
	Nov 20 21:14:40 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:40.888722    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzvnz\" (UniqueName: \"kubernetes.io/projected/d31e708d-c6bd-4313-81c8-4094f93ca502-kube-api-access-xzvnz\") pod \"coredns-66bc5c9577-7976f\" (UID: \"d31e708d-c6bd-4313-81c8-4094f93ca502\") " pod="kube-system/coredns-66bc5c9577-7976f"
	Nov 20 21:14:40 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:40.888931    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d31e708d-c6bd-4313-81c8-4094f93ca502-config-volume\") pod \"coredns-66bc5c9577-7976f\" (UID: \"d31e708d-c6bd-4313-81c8-4094f93ca502\") " pod="kube-system/coredns-66bc5c9577-7976f"
	Nov 20 21:14:42 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:42.277552    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7976f" podStartSLOduration=44.277533636 podStartE2EDuration="44.277533636s" podCreationTimestamp="2025-11-20 21:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:14:42.276970376 +0000 UTC m=+49.641267892" watchObservedRunningTime="2025-11-20 21:14:42.277533636 +0000 UTC m=+49.641831152"
	Nov 20 21:14:42 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:42.350515    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.350494683 podStartE2EDuration="42.350494683s" podCreationTimestamp="2025-11-20 21:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:14:42.327500849 +0000 UTC m=+49.691798365" watchObservedRunningTime="2025-11-20 21:14:42.350494683 +0000 UTC m=+49.714792215"
	Nov 20 21:14:44 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:44.829673    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sshwt\" (UniqueName: \"kubernetes.io/projected/53e62629-f375-4ed6-9baf-052a28f0f0fc-kube-api-access-sshwt\") pod \"busybox\" (UID: \"53e62629-f375-4ed6-9baf-052a28f0f0fc\") " pod="default/busybox"
	
	
	==> storage-provisioner [388690ebc3ca5f10870951db14bb6367b336590643da58189a0a2e1229b2140c] <==
	I1120 21:14:41.481029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 21:14:41.529260       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 21:14:41.536237       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 21:14:41.571314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:41.577531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:14:41.577908       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 21:14:41.580377       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-588348_38aa07ec-3023-417c-aff5-4238ef75ffad!
	I1120 21:14:41.585409       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b0f4bf38-dfd5-4644-b843-fc2a2b0abb71", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-588348_38aa07ec-3023-417c-aff5-4238ef75ffad became leader
	W1120 21:14:41.597625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:41.623404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:14:41.680964       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-588348_38aa07ec-3023-417c-aff5-4238ef75ffad!
	W1120 21:14:43.626776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:43.634151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:45.637127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:45.642174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:47.647025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:47.657863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:49.661491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:49.666395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:51.669387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:51.677147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:53.680928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:53.686508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:55.691272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:55.704705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-588348 -n default-k8s-diff-port-588348
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-588348 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-588348
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-588348:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "80a56514fc4d1c64f088f0462702334929bed729f22968f906e9688de6cbc854",
	        "Created": "2025-11-20T21:13:25.451039613Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 230410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:13:25.522234223Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/80a56514fc4d1c64f088f0462702334929bed729f22968f906e9688de6cbc854/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/80a56514fc4d1c64f088f0462702334929bed729f22968f906e9688de6cbc854/hostname",
	        "HostsPath": "/var/lib/docker/containers/80a56514fc4d1c64f088f0462702334929bed729f22968f906e9688de6cbc854/hosts",
	        "LogPath": "/var/lib/docker/containers/80a56514fc4d1c64f088f0462702334929bed729f22968f906e9688de6cbc854/80a56514fc4d1c64f088f0462702334929bed729f22968f906e9688de6cbc854-json.log",
	        "Name": "/default-k8s-diff-port-588348",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-588348:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-588348",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "80a56514fc4d1c64f088f0462702334929bed729f22968f906e9688de6cbc854",
	                "LowerDir": "/var/lib/docker/overlay2/2714d5748cecfc66f983f3029497fa1155487093ea18341d0874b46d82487abb-init/diff:/var/lib/docker/overlay2/5105da773b59b243b777c3c083d206b6a741bd11ebc5a0283799917fe36ebbb2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2714d5748cecfc66f983f3029497fa1155487093ea18341d0874b46d82487abb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2714d5748cecfc66f983f3029497fa1155487093ea18341d0874b46d82487abb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2714d5748cecfc66f983f3029497fa1155487093ea18341d0874b46d82487abb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-588348",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-588348/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-588348",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-588348",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-588348",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a4c8702905e517624184358309ab458ec5478ff3022cd210ba2571d4146b098",
	            "SandboxKey": "/var/run/docker/netns/1a4c8702905e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-588348": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:42:e2:da:01:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cd51cbccc34af9946874f089ffbed15390ea1be54b0475ffe986da80be71a59e",
	                    "EndpointID": "e00178d954410c7645268591bb5d9b63873939d4661fb6fad92a2cb11721946f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-588348",
	                        "80a56514fc4d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
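For manual triage, individual fields can be pulled out of the inspect dump above with docker's --format Go templates instead of re-reading the full JSON. A minimal sketch, assuming the default-k8s-diff-port-588348 container from this run is still present on the host (illustrative commands, not part of the test output):

  # first published host port for 22/tcp (33083 in the output above)
  docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-588348
  # ulimits set on the kic container ("Ulimits": [] above, so the daemon's default ulimits apply)
  docker inspect -f '{{json .HostConfig.Ulimits}}' default-k8s-diff-port-588348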
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-588348 -n default-k8s-diff-port-588348
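When reproducing this status check by hand, the same --format flag can report more than the Host field; a small sketch, assuming the usual minikube status template fields (Host, Kubelet, APIServer, Kubeconfig) are unchanged in v1.37.0:

  out/minikube-linux-arm64 status -p default-k8s-diff-port-588348 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'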
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-588348 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-588348 logs -n 25: (1.443046009s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable dashboard -p embed-certs-121127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-121127           │ jenkins │ v1.37.0 │ 20 Nov 25 21:12 UTC │ 20 Nov 25 21:12 UTC │
	│ start   │ -p embed-certs-121127 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-121127           │ jenkins │ v1.37.0 │ 20 Nov 25 21:12 UTC │ 20 Nov 25 21:13 UTC │
	│ image   │ no-preload-882483 image list --format=json                                                                                                                                                                                                          │ no-preload-882483            │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ pause   │ -p no-preload-882483 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-882483            │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ unpause │ -p no-preload-882483 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-882483            │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ delete  │ -p no-preload-882483                                                                                                                                                                                                                                │ no-preload-882483            │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ delete  │ -p no-preload-882483                                                                                                                                                                                                                                │ no-preload-882483            │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ delete  │ -p disable-driver-mounts-839927                                                                                                                                                                                                                     │ disable-driver-mounts-839927 │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ start   │ -p default-k8s-diff-port-588348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-588348 │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:14 UTC │
	│ image   │ embed-certs-121127 image list --format=json                                                                                                                                                                                                         │ embed-certs-121127           │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ pause   │ -p embed-certs-121127 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-121127           │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ unpause │ -p embed-certs-121127 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-121127           │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ delete  │ -p embed-certs-121127                                                                                                                                                                                                                               │ embed-certs-121127           │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ delete  │ -p embed-certs-121127                                                                                                                                                                                                                               │ embed-certs-121127           │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:13 UTC │
	│ start   │ -p newest-cni-701288 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:13 UTC │ 20 Nov 25 21:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-701288 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ stop    │ -p newest-cni-701288 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ addons  │ enable dashboard -p newest-cni-701288 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ start   │ -p newest-cni-701288 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ image   │ newest-cni-701288 image list --format=json                                                                                                                                                                                                          │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ pause   │ -p newest-cni-701288 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ unpause │ -p newest-cni-701288 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ delete  │ -p newest-cni-701288                                                                                                                                                                                                                                │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ delete  │ -p newest-cni-701288                                                                                                                                                                                                                                │ newest-cni-701288            │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ start   │ -p auto-448616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-448616                  │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:14:50
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:14:50.605146  240268 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:14:50.605380  240268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:50.605409  240268 out.go:374] Setting ErrFile to fd 2...
	I1120 21:14:50.605430  240268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:50.605780  240268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 21:14:50.606301  240268 out.go:368] Setting JSON to false
	I1120 21:14:50.607399  240268 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3440,"bootTime":1763669851,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1120 21:14:50.607514  240268 start.go:143] virtualization:  
	I1120 21:14:50.611466  240268 out.go:179] * [auto-448616] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:14:50.615728  240268 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:14:50.615815  240268 notify.go:221] Checking for updates...
	I1120 21:14:50.621972  240268 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:14:50.625028  240268 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 21:14:50.628058  240268 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	I1120 21:14:50.631232  240268 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:14:50.634220  240268 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:14:50.638063  240268 config.go:182] Loaded profile config "default-k8s-diff-port-588348": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 21:14:50.638187  240268 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:14:50.670309  240268 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:14:50.670463  240268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:14:50.730056  240268 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 21:14:50.720119267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:14:50.730177  240268 docker.go:319] overlay module found
	I1120 21:14:50.735288  240268 out.go:179] * Using the docker driver based on user configuration
	I1120 21:14:50.738286  240268 start.go:309] selected driver: docker
	I1120 21:14:50.738308  240268 start.go:930] validating driver "docker" against <nil>
	I1120 21:14:50.738322  240268 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:14:50.739256  240268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:14:50.801775  240268 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-20 21:14:50.792636772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:14:50.801925  240268 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 21:14:50.802157  240268 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:14:50.805237  240268 out.go:179] * Using Docker driver with root privileges
	I1120 21:14:50.808098  240268 cni.go:84] Creating CNI manager for ""
	I1120 21:14:50.808174  240268 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 21:14:50.808189  240268 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:14:50.808271  240268 start.go:353] cluster config:
	{Name:auto-448616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-448616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:con
tainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:14:50.811414  240268 out.go:179] * Starting "auto-448616" primary control-plane node in "auto-448616" cluster
	I1120 21:14:50.814403  240268 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 21:14:50.817659  240268 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:14:50.820503  240268 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 21:14:50.820550  240268 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:14:50.820553  240268 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1120 21:14:50.820650  240268 cache.go:65] Caching tarball of preloaded images
	I1120 21:14:50.820758  240268 preload.go:238] Found /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1120 21:14:50.820769  240268 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1120 21:14:50.820949  240268 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/config.json ...
	I1120 21:14:50.820976  240268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/config.json: {Name:mke8706baf99c998eac2958111693dc037c4c641 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:14:50.844842  240268 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:14:50.844867  240268 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:14:50.844880  240268 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:14:50.844903  240268 start.go:360] acquireMachinesLock for auto-448616: {Name:mk6b2a527a644286d225183467eb14d74f21ac12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:14:50.845018  240268 start.go:364] duration metric: took 94.09µs to acquireMachinesLock for "auto-448616"
	I1120 21:14:50.845050  240268 start.go:93] Provisioning new machine with config: &{Name:auto-448616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-448616 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1120 21:14:50.845124  240268 start.go:125] createHost starting for "" (driver="docker")
	I1120 21:14:50.848616  240268 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 21:14:50.848859  240268 start.go:159] libmachine.API.Create for "auto-448616" (driver="docker")
	I1120 21:14:50.848907  240268 client.go:173] LocalClient.Create starting
	I1120 21:14:50.848994  240268 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-2300/.minikube/certs/ca.pem
	I1120 21:14:50.849030  240268 main.go:143] libmachine: Decoding PEM data...
	I1120 21:14:50.849047  240268 main.go:143] libmachine: Parsing certificate...
	I1120 21:14:50.849096  240268 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-2300/.minikube/certs/cert.pem
	I1120 21:14:50.849118  240268 main.go:143] libmachine: Decoding PEM data...
	I1120 21:14:50.849135  240268 main.go:143] libmachine: Parsing certificate...
	I1120 21:14:50.849496  240268 cli_runner.go:164] Run: docker network inspect auto-448616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 21:14:50.866426  240268 cli_runner.go:211] docker network inspect auto-448616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 21:14:50.866553  240268 network_create.go:284] running [docker network inspect auto-448616] to gather additional debugging logs...
	I1120 21:14:50.866579  240268 cli_runner.go:164] Run: docker network inspect auto-448616
	W1120 21:14:50.883961  240268 cli_runner.go:211] docker network inspect auto-448616 returned with exit code 1
	I1120 21:14:50.883994  240268 network_create.go:287] error running [docker network inspect auto-448616]: docker network inspect auto-448616: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-448616 not found
	I1120 21:14:50.884024  240268 network_create.go:289] output of [docker network inspect auto-448616]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-448616 not found
	
	** /stderr **
	I1120 21:14:50.884130  240268 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:14:50.901057  240268 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8f2399b7fac6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:ce:e1:0f:d8:b1} reservation:<nil>}
	I1120 21:14:50.901425  240268 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-954bfb8e5d57 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:f3:60:ee:cc:b7} reservation:<nil>}
	I1120 21:14:50.901764  240268 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-02e4726a397e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c6:f0:04:c7:8f:fa} reservation:<nil>}
	I1120 21:14:50.902004  240268 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cd51cbccc34a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a2:3a:bd:ea:39:a9} reservation:<nil>}
	I1120 21:14:50.902403  240268 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b16b0}
	I1120 21:14:50.902426  240268 network_create.go:124] attempt to create docker network auto-448616 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1120 21:14:50.902529  240268 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-448616 auto-448616
	I1120 21:14:50.961641  240268 network_create.go:108] docker network auto-448616 192.168.85.0/24 created
	I1120 21:14:50.961687  240268 kic.go:121] calculated static IP "192.168.85.2" for the "auto-448616" container
	I1120 21:14:50.961771  240268 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 21:14:50.980431  240268 cli_runner.go:164] Run: docker volume create auto-448616 --label name.minikube.sigs.k8s.io=auto-448616 --label created_by.minikube.sigs.k8s.io=true
	I1120 21:14:50.999656  240268 oci.go:103] Successfully created a docker volume auto-448616
	I1120 21:14:50.999737  240268 cli_runner.go:164] Run: docker run --rm --name auto-448616-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-448616 --entrypoint /usr/bin/test -v auto-448616:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 21:14:51.571642  240268 oci.go:107] Successfully prepared a docker volume auto-448616
	I1120 21:14:51.571731  240268 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 21:14:51.571743  240268 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 21:14:51.571809  240268 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-448616:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	26566cb56867b       1611cd07b61d5       11 seconds ago       Running             busybox                   0                   12c3c8da49dc9       busybox                                                default
	cbea74d47f159       138784d87c9c5       17 seconds ago       Running             coredns                   0                   ef0b97e07cb52       coredns-66bc5c9577-7976f                               kube-system
	388690ebc3ca5       ba04bb24b9575       17 seconds ago       Running             storage-provisioner       0                   5983e840d52f1       storage-provisioner                                    kube-system
	767f3bc76acb6       b1a8c6f707935       59 seconds ago       Running             kindnet-cni               0                   2e4177f344198       kindnet-jjjzp                                          kube-system
	daaad96aa53bf       05baa95f5142d       59 seconds ago       Running             kube-proxy                0                   67033f71998cf       kube-proxy-px884                                       kube-system
	b9a7cebee83d9       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   1e5d14bc2c632       kube-apiserver-default-k8s-diff-port-588348            kube-system
	761b4e9be37fd       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   e14f999f42e58       kube-controller-manager-default-k8s-diff-port-588348   kube-system
	2c9037165a1bd       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   b6ecc25525548       kube-scheduler-default-k8s-diff-port-588348            kube-system
	7e7c2f9f39250       a1894772a478e       About a minute ago   Running             etcd                      0                   c4e6f92eb5f67       etcd-default-k8s-diff-port-588348                      kube-system
	
	
	==> containerd <==
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.335865311Z" level=info msg="connecting to shim 388690ebc3ca5f10870951db14bb6367b336590643da58189a0a2e1229b2140c" address="unix:///run/containerd/s/d69bf8d25f4b3d2a2a12e28a9f5e64d1d83a96dc31a6d9c05cbe4a19cdd1a63b" protocol=ttrpc version=3
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.411307184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7976f,Uid:d31e708d-c6bd-4313-81c8-4094f93ca502,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef0b97e07cb5221195a4cab8962cde00e7828bab3d497c48a5a77bb819f3d2d1\""
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.421310210Z" level=info msg="CreateContainer within sandbox \"ef0b97e07cb5221195a4cab8962cde00e7828bab3d497c48a5a77bb819f3d2d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.449470992Z" level=info msg="Container cbea74d47f159d1a24b96ceb695839a4e8c70c67f0fb89fe6c5a8b4fbdd3681d: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.468916419Z" level=info msg="CreateContainer within sandbox \"ef0b97e07cb5221195a4cab8962cde00e7828bab3d497c48a5a77bb819f3d2d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cbea74d47f159d1a24b96ceb695839a4e8c70c67f0fb89fe6c5a8b4fbdd3681d\""
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.483453684Z" level=info msg="StartContainer for \"cbea74d47f159d1a24b96ceb695839a4e8c70c67f0fb89fe6c5a8b4fbdd3681d\""
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.493719640Z" level=info msg="connecting to shim cbea74d47f159d1a24b96ceb695839a4e8c70c67f0fb89fe6c5a8b4fbdd3681d" address="unix:///run/containerd/s/ecedb5742610c03c59a36fb40ff12e65e539ae68d3e2a2911d6e50d30c1ae997" protocol=ttrpc version=3
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.520413244Z" level=info msg="StartContainer for \"388690ebc3ca5f10870951db14bb6367b336590643da58189a0a2e1229b2140c\" returns successfully"
	Nov 20 21:14:41 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:41.633593366Z" level=info msg="StartContainer for \"cbea74d47f159d1a24b96ceb695839a4e8c70c67f0fb89fe6c5a8b4fbdd3681d\" returns successfully"
	Nov 20 21:14:45 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:45.083415580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:53e62629-f375-4ed6-9baf-052a28f0f0fc,Namespace:default,Attempt:0,}"
	Nov 20 21:14:45 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:45.241929214Z" level=info msg="connecting to shim 12c3c8da49dc9d6b31a0512c4d67e35fe6ae31a9796ece30ede5866bf8a97522" address="unix:///run/containerd/s/47fbb307823cc761aa21f6ca15fca02c598e231827abc18963f800fff6190b63" namespace=k8s.io protocol=ttrpc version=3
	Nov 20 21:14:45 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:45.377695321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:53e62629-f375-4ed6-9baf-052a28f0f0fc,Namespace:default,Attempt:0,} returns sandbox id \"12c3c8da49dc9d6b31a0512c4d67e35fe6ae31a9796ece30ede5866bf8a97522\""
	Nov 20 21:14:45 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:45.381830323Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.516297013Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.518554409Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937191"
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.520980859Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.524863352Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.525671515Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.143794365s"
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.525809651Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.535706887Z" level=info msg="CreateContainer within sandbox \"12c3c8da49dc9d6b31a0512c4d67e35fe6ae31a9796ece30ede5866bf8a97522\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.575250439Z" level=info msg="Container 26566cb56867bc0ea72aad33dec15319c7a87a702f2a115781ce8af8cc8be488: CDI devices from CRI Config.CDIDevices: []"
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.593516062Z" level=info msg="CreateContainer within sandbox \"12c3c8da49dc9d6b31a0512c4d67e35fe6ae31a9796ece30ede5866bf8a97522\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"26566cb56867bc0ea72aad33dec15319c7a87a702f2a115781ce8af8cc8be488\""
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.595511159Z" level=info msg="StartContainer for \"26566cb56867bc0ea72aad33dec15319c7a87a702f2a115781ce8af8cc8be488\""
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.596511252Z" level=info msg="connecting to shim 26566cb56867bc0ea72aad33dec15319c7a87a702f2a115781ce8af8cc8be488" address="unix:///run/containerd/s/47fbb307823cc761aa21f6ca15fca02c598e231827abc18963f800fff6190b63" protocol=ttrpc version=3
	Nov 20 21:14:47 default-k8s-diff-port-588348 containerd[760]: time="2025-11-20T21:14:47.712853028Z" level=info msg="StartContainer for \"26566cb56867bc0ea72aad33dec15319c7a87a702f2a115781ce8af8cc8be488\" returns successfully"
	
	
	==> coredns [cbea74d47f159d1a24b96ceb695839a4e8c70c67f0fb89fe6c5a8b4fbdd3681d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48000 - 29909 "HINFO IN 8817772603551219641.8986221277654578198. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039350194s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-588348
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-588348
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=default-k8s-diff-port-588348
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_13_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:13:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-588348
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:14:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:14:54 +0000   Thu, 20 Nov 2025 21:13:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:14:54 +0000   Thu, 20 Nov 2025 21:13:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:14:54 +0000   Thu, 20 Nov 2025 21:13:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:14:54 +0000   Thu, 20 Nov 2025 21:14:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-588348
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                05c4fdec-7aa8-4d94-adfa-f2fb740f6a80
	  Boot ID:                    0cc3a06a-788d-45d4-8fff-2131330a9ee0
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-66bc5c9577-7976f                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     61s
	  kube-system                 etcd-default-k8s-diff-port-588348                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         66s
	  kube-system                 kindnet-jjjzp                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      61s
	  kube-system                 kube-apiserver-default-k8s-diff-port-588348             250m (12%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-588348    200m (10%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-proxy-px884                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-default-k8s-diff-port-588348             100m (5%)     0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 58s                kube-proxy       
	  Normal   NodeHasSufficientMemory  76s (x8 over 76s)  kubelet          Node default-k8s-diff-port-588348 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    76s (x8 over 76s)  kubelet          Node default-k8s-diff-port-588348 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     76s (x7 over 76s)  kubelet          Node default-k8s-diff-port-588348 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 67s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 67s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  66s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  66s                kubelet          Node default-k8s-diff-port-588348 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s                kubelet          Node default-k8s-diff-port-588348 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s                kubelet          Node default-k8s-diff-port-588348 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           62s                node-controller  Node default-k8s-diff-port-588348 event: Registered Node default-k8s-diff-port-588348 in Controller
	  Normal   NodeReady                19s                kubelet          Node default-k8s-diff-port-588348 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov20 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014399] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498138] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.765613] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.782554] kauditd_printk_skb: 36 callbacks suppressed
	[Nov20 20:40] hrtimer: interrupt took 1888672 ns
	
	
	==> etcd [7e7c2f9f3925083cc407239a2a96f89b556cdfe9ab7f984754cc0d6c7ac818ec] <==
	{"level":"warn","ts":"2025-11-20T21:13:46.654756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.684205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.718948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.771319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.780549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.810909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.832495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.863713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.882526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.896978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.913199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.938387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.952838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.971089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:46.992518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.009328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.037780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.042337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.061917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.086253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.100795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.122014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.182565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:13:47.258755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49144","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T21:14:00.218359Z","caller":"traceutil/trace.go:172","msg":"trace[1039107522] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"102.280859ms","start":"2025-11-20T21:14:00.116050Z","end":"2025-11-20T21:14:00.218331Z","steps":["trace[1039107522] 'process raft request'  (duration: 14.154848ms)","trace[1039107522] 'store kv pair into bolt db' {req_type:put; key:/registry/events/kube-system/kindnet-jjjzp.1879d377afa4124d; req_size:734; } (duration: 87.233966ms)"],"step_count":2}
	
	
	==> kernel <==
	 21:14:59 up 57 min,  0 user,  load average: 4.32, 4.08, 3.24
	Linux default-k8s-diff-port-588348 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [767f3bc76acb6affee6a0cdcd5a54c38538eca67ec8365a2d9c2670788965a37] <==
	I1120 21:14:00.328756       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:14:00.338968       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 21:14:00.339150       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:14:00.339178       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:14:00.339200       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:14:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:14:00.638517       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:14:00.654589       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:14:00.654637       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:14:00.654820       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 21:14:30.639247       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 21:14:30.652063       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1120 21:14:30.652178       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 21:14:30.652260       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1120 21:14:31.654977       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:14:31.655205       1 metrics.go:72] Registering metrics
	I1120 21:14:31.655392       1 controller.go:711] "Syncing nftables rules"
	I1120 21:14:40.642490       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:14:40.642548       1 main.go:301] handling current node
	I1120 21:14:50.637687       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:14:50.637911       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b9a7cebee83d9b6ff3498d725e3879fb497df6d050bebbc4589c793805606f2d] <==
	I1120 21:13:49.827059       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:13:49.829404       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 21:13:49.859755       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 21:13:49.861270       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:13:49.864690       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 21:13:49.867674       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:13:49.941056       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:13:50.085300       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 21:13:50.122570       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 21:13:50.122598       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:13:51.595970       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:13:51.672072       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:13:51.798011       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 21:13:51.810479       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1120 21:13:51.811667       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:13:51.817040       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:13:52.590770       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:13:52.788313       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:13:52.819203       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 21:13:52.842286       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:13:58.423120       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:13:58.546779       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1120 21:13:58.619668       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:13:58.683775       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1120 21:14:54.894122       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:46634: use of closed network connection
	
	
	==> kube-controller-manager [761b4e9be37fdf40acb216cebed3ff0a7936e6b61ca066eee7515b800ed94627] <==
	I1120 21:13:57.692541       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 21:13:57.694237       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 21:13:57.694259       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 21:13:57.694273       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 21:13:57.703725       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 21:13:57.704037       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:13:57.704325       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:13:57.704465       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:13:57.709353       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 21:13:57.716011       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:13:57.725791       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 21:13:57.728444       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:13:57.729771       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1120 21:13:57.730033       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 21:13:57.732836       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 21:13:57.733007       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 21:13:57.733155       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 21:13:57.733280       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 21:13:57.733416       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:13:57.735669       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1120 21:13:57.736905       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 21:13:57.747083       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 21:13:57.760408       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-588348" podCIDRs=["10.244.0.0/24"]
	I1120 21:13:57.774712       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:14:42.689167       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [daaad96aa53bfcd2f844e5723b9af2c6fe4e178b67ede9e409781d2792172075] <==
	I1120 21:13:59.967642       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:14:00.056621       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:14:00.157796       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:14:00.157849       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 21:14:00.157930       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:14:00.399891       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:14:00.399954       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:14:00.449149       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:14:00.452567       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:14:00.452606       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:14:00.464326       1 config.go:200] "Starting service config controller"
	I1120 21:14:00.464352       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:14:00.464376       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:14:00.464381       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:14:00.464393       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:14:00.464396       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:14:00.477310       1 config.go:309] "Starting node config controller"
	I1120 21:14:00.477335       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:14:00.477344       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:14:00.566507       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:14:00.566555       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:14:00.566602       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2c9037165a1bd34334e775a68eaa1d0d738c396f7bb845bb5ac3f4b0271ea2bd] <==
	I1120 21:13:47.426704       1 serving.go:386] Generated self-signed cert in-memory
	I1120 21:13:51.762552       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 21:13:51.762784       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:13:51.768626       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 21:13:51.768861       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 21:13:51.769040       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:13:51.769181       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:13:51.769163       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 21:13:51.769451       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:13:51.769475       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:13:51.769184       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 21:13:51.869561       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:13:51.869575       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:13:51.869600       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 20 21:13:53 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:53.958865    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-588348" podStartSLOduration=0.958847089 podStartE2EDuration="958.847089ms" podCreationTimestamp="2025-11-20 21:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:13:53.956313208 +0000 UTC m=+1.320610732" watchObservedRunningTime="2025-11-20 21:13:53.958847089 +0000 UTC m=+1.323144605"
	Nov 20 21:13:53 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:53.988191    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-588348" podStartSLOduration=0.98817208 podStartE2EDuration="988.17208ms" podCreationTimestamp="2025-11-20 21:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:13:53.971378324 +0000 UTC m=+1.335675848" watchObservedRunningTime="2025-11-20 21:13:53.98817208 +0000 UTC m=+1.352469604"
	Nov 20 21:13:54 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:54.005734    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-588348" podStartSLOduration=1.00571506 podStartE2EDuration="1.00571506s" podCreationTimestamp="2025-11-20 21:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:13:53.988821996 +0000 UTC m=+1.353119520" watchObservedRunningTime="2025-11-20 21:13:54.00571506 +0000 UTC m=+1.370012576"
	Nov 20 21:13:54 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:54.033148    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-588348" podStartSLOduration=1.033075584 podStartE2EDuration="1.033075584s" podCreationTimestamp="2025-11-20 21:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:13:54.006327264 +0000 UTC m=+1.370624797" watchObservedRunningTime="2025-11-20 21:13:54.033075584 +0000 UTC m=+1.397373117"
	Nov 20 21:13:57 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:57.779640    1497 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 21:13:57 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:57.781291    1497 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.742703    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/78bc23e7-7a17-4da9-8a05-c0489d7e231e-cni-cfg\") pod \"kindnet-jjjzp\" (UID: \"78bc23e7-7a17-4da9-8a05-c0489d7e231e\") " pod="kube-system/kindnet-jjjzp"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.742759    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7fv2\" (UniqueName: \"kubernetes.io/projected/78bc23e7-7a17-4da9-8a05-c0489d7e231e-kube-api-access-w7fv2\") pod \"kindnet-jjjzp\" (UID: \"78bc23e7-7a17-4da9-8a05-c0489d7e231e\") " pod="kube-system/kindnet-jjjzp"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.742784    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78bc23e7-7a17-4da9-8a05-c0489d7e231e-xtables-lock\") pod \"kindnet-jjjzp\" (UID: \"78bc23e7-7a17-4da9-8a05-c0489d7e231e\") " pod="kube-system/kindnet-jjjzp"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.742803    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78bc23e7-7a17-4da9-8a05-c0489d7e231e-lib-modules\") pod \"kindnet-jjjzp\" (UID: \"78bc23e7-7a17-4da9-8a05-c0489d7e231e\") " pod="kube-system/kindnet-jjjzp"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.843325    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4477867-32d2-49c8-ad38-67a97a6a0138-xtables-lock\") pod \"kube-proxy-px884\" (UID: \"d4477867-32d2-49c8-ad38-67a97a6a0138\") " pod="kube-system/kube-proxy-px884"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.843383    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4477867-32d2-49c8-ad38-67a97a6a0138-lib-modules\") pod \"kube-proxy-px884\" (UID: \"d4477867-32d2-49c8-ad38-67a97a6a0138\") " pod="kube-system/kube-proxy-px884"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.843417    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz5nt\" (UniqueName: \"kubernetes.io/projected/d4477867-32d2-49c8-ad38-67a97a6a0138-kube-api-access-nz5nt\") pod \"kube-proxy-px884\" (UID: \"d4477867-32d2-49c8-ad38-67a97a6a0138\") " pod="kube-system/kube-proxy-px884"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.843469    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4477867-32d2-49c8-ad38-67a97a6a0138-kube-proxy\") pod \"kube-proxy-px884\" (UID: \"d4477867-32d2-49c8-ad38-67a97a6a0138\") " pod="kube-system/kube-proxy-px884"
	Nov 20 21:13:58 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:13:58.920184    1497 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 20 21:14:00 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:00.230879    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-px884" podStartSLOduration=2.230857368 podStartE2EDuration="2.230857368s" podCreationTimestamp="2025-11-20 21:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:14:00.230352472 +0000 UTC m=+7.594650447" watchObservedRunningTime="2025-11-20 21:14:00.230857368 +0000 UTC m=+7.595154883"
	Nov 20 21:14:00 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:00.374458    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-jjjzp" podStartSLOduration=2.37441092 podStartE2EDuration="2.37441092s" podCreationTimestamp="2025-11-20 21:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:14:00.371674532 +0000 UTC m=+7.735972056" watchObservedRunningTime="2025-11-20 21:14:00.37441092 +0000 UTC m=+7.738708444"
	Nov 20 21:14:40 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:40.656760    1497 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 21:14:40 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:40.787434    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fbb9a153-aade-496a-bfdc-2073f2f51065-tmp\") pod \"storage-provisioner\" (UID: \"fbb9a153-aade-496a-bfdc-2073f2f51065\") " pod="kube-system/storage-provisioner"
	Nov 20 21:14:40 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:40.787637    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhck7\" (UniqueName: \"kubernetes.io/projected/fbb9a153-aade-496a-bfdc-2073f2f51065-kube-api-access-dhck7\") pod \"storage-provisioner\" (UID: \"fbb9a153-aade-496a-bfdc-2073f2f51065\") " pod="kube-system/storage-provisioner"
	Nov 20 21:14:40 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:40.888722    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzvnz\" (UniqueName: \"kubernetes.io/projected/d31e708d-c6bd-4313-81c8-4094f93ca502-kube-api-access-xzvnz\") pod \"coredns-66bc5c9577-7976f\" (UID: \"d31e708d-c6bd-4313-81c8-4094f93ca502\") " pod="kube-system/coredns-66bc5c9577-7976f"
	Nov 20 21:14:40 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:40.888931    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d31e708d-c6bd-4313-81c8-4094f93ca502-config-volume\") pod \"coredns-66bc5c9577-7976f\" (UID: \"d31e708d-c6bd-4313-81c8-4094f93ca502\") " pod="kube-system/coredns-66bc5c9577-7976f"
	Nov 20 21:14:42 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:42.277552    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7976f" podStartSLOduration=44.277533636 podStartE2EDuration="44.277533636s" podCreationTimestamp="2025-11-20 21:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:14:42.276970376 +0000 UTC m=+49.641267892" watchObservedRunningTime="2025-11-20 21:14:42.277533636 +0000 UTC m=+49.641831152"
	Nov 20 21:14:42 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:42.350515    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.350494683 podStartE2EDuration="42.350494683s" podCreationTimestamp="2025-11-20 21:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:14:42.327500849 +0000 UTC m=+49.691798365" watchObservedRunningTime="2025-11-20 21:14:42.350494683 +0000 UTC m=+49.714792215"
	Nov 20 21:14:44 default-k8s-diff-port-588348 kubelet[1497]: I1120 21:14:44.829673    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sshwt\" (UniqueName: \"kubernetes.io/projected/53e62629-f375-4ed6-9baf-052a28f0f0fc-kube-api-access-sshwt\") pod \"busybox\" (UID: \"53e62629-f375-4ed6-9baf-052a28f0f0fc\") " pod="default/busybox"
	
	
	==> storage-provisioner [388690ebc3ca5f10870951db14bb6367b336590643da58189a0a2e1229b2140c] <==
	W1120 21:14:41.577531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:14:41.577908       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 21:14:41.580377       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-588348_38aa07ec-3023-417c-aff5-4238ef75ffad!
	I1120 21:14:41.585409       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b0f4bf38-dfd5-4644-b843-fc2a2b0abb71", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-588348_38aa07ec-3023-417c-aff5-4238ef75ffad became leader
	W1120 21:14:41.597625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:41.623404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:14:41.680964       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-588348_38aa07ec-3023-417c-aff5-4238ef75ffad!
	W1120 21:14:43.626776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:43.634151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:45.637127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:45.642174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:47.647025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:47.657863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:49.661491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:49.666395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:51.669387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:51.677147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:53.680928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:53.686508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:55.691272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:55.704705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:57.714661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:57.730780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:59.738149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:14:59.749401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-588348 -n default-k8s-diff-port-588348
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-588348 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (16.61s)


Test pass (299/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 8.25
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 9.47
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 156.73
29 TestAddons/serial/Volcano 39.96
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 10.01
35 TestAddons/parallel/Registry 15.86
36 TestAddons/parallel/RegistryCreds 0.79
37 TestAddons/parallel/Ingress 18.81
38 TestAddons/parallel/InspektorGadget 11.93
39 TestAddons/parallel/MetricsServer 5.86
41 TestAddons/parallel/CSI 33.17
42 TestAddons/parallel/Headlamp 16.08
43 TestAddons/parallel/CloudSpanner 5.72
44 TestAddons/parallel/LocalPath 51.3
45 TestAddons/parallel/NvidiaDevicePlugin 6.58
46 TestAddons/parallel/Yakd 10.84
48 TestAddons/StoppedEnableDisable 12.44
49 TestCertOptions 37.69
50 TestCertExpiration 232.73
52 TestForceSystemdFlag 39.33
53 TestForceSystemdEnv 43.11
54 TestDockerEnvContainerd 45.77
58 TestErrorSpam/setup 34.99
59 TestErrorSpam/start 0.79
60 TestErrorSpam/status 1.13
61 TestErrorSpam/pause 1.69
62 TestErrorSpam/unpause 1.75
63 TestErrorSpam/stop 1.66
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 80.41
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 7.21
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.51
75 TestFunctional/serial/CacheCmd/cache/add_local 1.22
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.87
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 46.16
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.47
86 TestFunctional/serial/LogsFileCmd 1.55
87 TestFunctional/serial/InvalidService 4.39
89 TestFunctional/parallel/ConfigCmd 0.42
90 TestFunctional/parallel/DashboardCmd 7.01
91 TestFunctional/parallel/DryRun 0.64
92 TestFunctional/parallel/InternationalLanguage 0.28
93 TestFunctional/parallel/StatusCmd 1.44
97 TestFunctional/parallel/ServiceCmdConnect 6.62
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 25.81
101 TestFunctional/parallel/SSHCmd 1.11
102 TestFunctional/parallel/CpCmd 1.71
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.67
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.88
113 TestFunctional/parallel/License 0.33
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
116 TestFunctional/parallel/Version/short 0.1
117 TestFunctional/parallel/Version/components 1.54
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.44
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
125 TestFunctional/parallel/ImageCommands/ImageBuild 4.33
126 TestFunctional/parallel/ImageCommands/Setup 0.63
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.37
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.24
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.35
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.72
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/MountCmd/any-port 8.49
144 TestFunctional/parallel/MountCmd/specific-port 2.03
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.53
146 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
148 TestFunctional/parallel/ServiceCmd/List 0.58
149 TestFunctional/parallel/ProfileCmd/profile_list 0.55
150 TestFunctional/parallel/ServiceCmd/JSONOutput 0.62
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.58
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
153 TestFunctional/parallel/ServiceCmd/Format 0.54
154 TestFunctional/parallel/ServiceCmd/URL 0.51
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 205.2
163 TestMultiControlPlane/serial/DeployApp 7.37
164 TestMultiControlPlane/serial/PingHostFromPods 1.59
165 TestMultiControlPlane/serial/AddWorkerNode 30.05
166 TestMultiControlPlane/serial/NodeLabels 0.13
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.29
168 TestMultiControlPlane/serial/CopyFile 20.6
169 TestMultiControlPlane/serial/StopSecondaryNode 13.02
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
171 TestMultiControlPlane/serial/RestartSecondaryNode 13.93
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.22
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 106.13
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.08
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
176 TestMultiControlPlane/serial/StopCluster 36.47
177 TestMultiControlPlane/serial/RestartCluster 60.23
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
179 TestMultiControlPlane/serial/AddSecondaryNode 85.57
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.05
185 TestJSONOutput/start/Command 82.34
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.7
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.65
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.03
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 72.25
211 TestKicCustomNetwork/use_default_bridge_network 39.63
212 TestKicExistingNetwork 39.71
213 TestKicCustomSubnet 36.24
214 TestKicStaticIP 39.21
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 74.28
219 TestMountStart/serial/StartWithMountFirst 8.57
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 8.79
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.7
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.54
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 107.56
231 TestMultiNode/serial/DeployApp2Nodes 5.16
232 TestMultiNode/serial/PingHostFrom2Pods 1.02
233 TestMultiNode/serial/AddNode 28.04
234 TestMultiNode/serial/MultiNodeLabels 0.08
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.37
237 TestMultiNode/serial/StopNode 2.41
238 TestMultiNode/serial/StartAfterStop 8.15
239 TestMultiNode/serial/RestartKeepsNodes 76.75
240 TestMultiNode/serial/DeleteNode 5.81
241 TestMultiNode/serial/StopMultiNode 24.27
242 TestMultiNode/serial/RestartMultiNode 49.57
243 TestMultiNode/serial/ValidateNameConflict 40.75
248 TestPreload 133.78
250 TestScheduledStopUnix 110.61
253 TestInsufficientStorage 10.54
254 TestRunningBinaryUpgrade 70.62
256 TestKubernetesUpgrade 350.93
257 TestMissingContainerUpgrade 148.82
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 45.11
261 TestNoKubernetes/serial/StartWithStopK8s 20.65
262 TestNoKubernetes/serial/Start 10.32
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
265 TestNoKubernetes/serial/ProfileList 0.82
266 TestNoKubernetes/serial/Stop 1.29
267 TestNoKubernetes/serial/StartNoArgs 6.6
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
269 TestStoppedBinaryUpgrade/Setup 8.32
270 TestStoppedBinaryUpgrade/Upgrade 52.38
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.39
280 TestPause/serial/Start 83.4
281 TestPause/serial/SecondStartNoReconfiguration 6.94
282 TestPause/serial/Pause 0.74
283 TestPause/serial/VerifyStatus 0.33
284 TestPause/serial/Unpause 0.63
285 TestPause/serial/PauseAgain 1.07
286 TestPause/serial/DeletePaused 2.99
287 TestPause/serial/VerifyDeletedResources 0.53
295 TestNetworkPlugins/group/false 5.03
300 TestStartStop/group/old-k8s-version/serial/FirstStart 63.87
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.21
303 TestStartStop/group/old-k8s-version/serial/Stop 12.14
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
305 TestStartStop/group/old-k8s-version/serial/SecondStart 51.7
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.03
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
310 TestStartStop/group/no-preload/serial/FirstStart 78.82
311 TestStartStop/group/old-k8s-version/serial/Pause 4.57
313 TestStartStop/group/embed-certs/serial/FirstStart 89.44
315 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
316 TestStartStop/group/no-preload/serial/Stop 12.78
318 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.29
319 TestStartStop/group/no-preload/serial/SecondStart 50.12
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.44
321 TestStartStop/group/embed-certs/serial/Stop 12.47
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
323 TestStartStop/group/embed-certs/serial/SecondStart 51.54
324 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
326 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
327 TestStartStop/group/no-preload/serial/Pause 3.14
329 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.86
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.12
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
333 TestStartStop/group/embed-certs/serial/Pause 3.95
335 TestStartStop/group/newest-cni/serial/FirstStart 42.94
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
338 TestStartStop/group/newest-cni/serial/Stop 1.36
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
340 TestStartStop/group/newest-cni/serial/SecondStart 17.67
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
345 TestStartStop/group/newest-cni/serial/Pause 3.22
346 TestNetworkPlugins/group/auto/Start 81.52
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.45
348 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.61
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.36
350 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 53.94
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
352 TestNetworkPlugins/group/auto/KubeletFlags 0.31
353 TestNetworkPlugins/group/auto/NetCatPod 9.3
354 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
355 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
356 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.38
357 TestNetworkPlugins/group/auto/DNS 0.24
358 TestNetworkPlugins/group/auto/Localhost 0.21
359 TestNetworkPlugins/group/auto/HairPin 0.19
360 TestNetworkPlugins/group/kindnet/Start 92.24
361 TestNetworkPlugins/group/calico/Start 67.18
362 TestNetworkPlugins/group/calico/ControllerPod 6.02
363 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.52
365 TestNetworkPlugins/group/calico/NetCatPod 9.3
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
367 TestNetworkPlugins/group/kindnet/NetCatPod 10.36
368 TestNetworkPlugins/group/calico/DNS 0.27
369 TestNetworkPlugins/group/calico/Localhost 0.16
370 TestNetworkPlugins/group/calico/HairPin 0.15
371 TestNetworkPlugins/group/kindnet/DNS 0.24
372 TestNetworkPlugins/group/kindnet/Localhost 0.2
373 TestNetworkPlugins/group/kindnet/HairPin 0.23
374 TestNetworkPlugins/group/custom-flannel/Start 61.87
375 TestNetworkPlugins/group/enable-default-cni/Start 81.14
376 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
377 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.36
378 TestNetworkPlugins/group/custom-flannel/DNS 0.17
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.39
383 TestNetworkPlugins/group/flannel/Start 60.14
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
387 TestNetworkPlugins/group/bridge/Start 78.13
388 TestNetworkPlugins/group/flannel/ControllerPod 6.01
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
390 TestNetworkPlugins/group/flannel/NetCatPod 11.35
391 TestNetworkPlugins/group/flannel/DNS 0.17
392 TestNetworkPlugins/group/flannel/Localhost 0.15
393 TestNetworkPlugins/group/flannel/HairPin 0.16
394 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
395 TestNetworkPlugins/group/bridge/NetCatPod 8.29
396 TestNetworkPlugins/group/bridge/DNS 0.19
397 TestNetworkPlugins/group/bridge/Localhost 0.15
398 TestNetworkPlugins/group/bridge/HairPin 0.16
x
+
TestDownloadOnly/v1.28.0/json-events (8.25s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-501066 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-501066 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.247734651s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (8.25s)

x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1120 20:21:10.258060    4089 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1120 20:21:10.258141    4089 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-501066
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-501066: exit status 85 (89.801887ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-501066 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-501066 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:21:02
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:21:02.061345    4094 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:21:02.061456    4094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:02.061462    4094 out.go:374] Setting ErrFile to fd 2...
	I1120 20:21:02.061466    4094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:02.061723    4094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	W1120 20:21:02.061897    4094 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21923-2300/.minikube/config/config.json: open /home/jenkins/minikube-integration/21923-2300/.minikube/config/config.json: no such file or directory
	I1120 20:21:02.062296    4094 out.go:368] Setting JSON to true
	I1120 20:21:02.063198    4094 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":211,"bootTime":1763669851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1120 20:21:02.063271    4094 start.go:143] virtualization:  
	I1120 20:21:02.065288    4094 out.go:99] [download-only-501066] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1120 20:21:02.065466    4094 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball: no such file or directory
	I1120 20:21:02.065593    4094 notify.go:221] Checking for updates...
	I1120 20:21:02.066718    4094 out.go:171] MINIKUBE_LOCATION=21923
	I1120 20:21:02.067990    4094 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:21:02.069563    4094 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 20:21:02.070931    4094 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	I1120 20:21:02.072089    4094 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1120 20:21:02.074111    4094 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1120 20:21:02.074371    4094 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:21:02.096955    4094 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 20:21:02.097187    4094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:21:02.504497    4094 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-20 20:21:02.494975012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 20:21:02.504604    4094 docker.go:319] overlay module found
	I1120 20:21:02.505920    4094 out.go:99] Using the docker driver based on user configuration
	I1120 20:21:02.505959    4094 start.go:309] selected driver: docker
	I1120 20:21:02.505966    4094 start.go:930] validating driver "docker" against <nil>
	I1120 20:21:02.506098    4094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:21:02.575475    4094 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-20 20:21:02.56618794 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 20:21:02.575641    4094 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 20:21:02.575978    4094 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1120 20:21:02.576156    4094 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1120 20:21:02.577933    4094 out.go:171] Using Docker driver with root privileges
	I1120 20:21:02.579502    4094 cni.go:84] Creating CNI manager for ""
	I1120 20:21:02.579576    4094 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:21:02.579589    4094 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 20:21:02.579663    4094 start.go:353] cluster config:
	{Name:download-only-501066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-501066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:21:02.581221    4094 out.go:99] Starting "download-only-501066" primary control-plane node in "download-only-501066" cluster
	I1120 20:21:02.581248    4094 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 20:21:02.582657    4094 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:21:02.582722    4094 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1120 20:21:02.582785    4094 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:21:02.597761    4094 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1120 20:21:02.597983    4094 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1120 20:21:02.598079    4094 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1120 20:21:02.742614    4094 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1120 20:21:02.742643    4094 cache.go:65] Caching tarball of preloaded images
	I1120 20:21:02.742795    4094 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1120 20:21:02.744418    4094 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1120 20:21:02.744454    4094 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1120 20:21:02.847350    4094 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1120 20:21:02.847474    4094 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-501066 host does not exist
	  To start a cluster, run: "minikube start -p download-only-501066"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-501066
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (9.47s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-844363 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-844363 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.471274822s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (9.47s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1120 20:21:20.182582    4089 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1120 20:21:20.182621    4089 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-844363
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-844363: exit status 85 (89.820411ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-501066 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-501066 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-501066                                                                                                                                                               │ download-only-501066 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ start   │ -o=json --download-only -p download-only-844363 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-844363 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:21:10
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:21:10.751713    4294 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:21:10.751857    4294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:10.751868    4294 out.go:374] Setting ErrFile to fd 2...
	I1120 20:21:10.751873    4294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:10.752243    4294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 20:21:10.753045    4294 out.go:368] Setting JSON to true
	I1120 20:21:10.753883    4294 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":220,"bootTime":1763669851,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1120 20:21:10.753990    4294 start.go:143] virtualization:  
	I1120 20:21:10.757492    4294 out.go:99] [download-only-844363] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 20:21:10.757748    4294 notify.go:221] Checking for updates...
	I1120 20:21:10.760638    4294 out.go:171] MINIKUBE_LOCATION=21923
	I1120 20:21:10.763679    4294 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:21:10.766712    4294 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 20:21:10.769526    4294 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	I1120 20:21:10.772625    4294 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1120 20:21:10.778452    4294 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1120 20:21:10.778732    4294 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:21:10.805803    4294 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 20:21:10.805913    4294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:21:10.865287    4294 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2025-11-20 20:21:10.856160096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 20:21:10.865409    4294 docker.go:319] overlay module found
	I1120 20:21:10.868347    4294 out.go:99] Using the docker driver based on user configuration
	I1120 20:21:10.868385    4294 start.go:309] selected driver: docker
	I1120 20:21:10.868392    4294 start.go:930] validating driver "docker" against <nil>
	I1120 20:21:10.868501    4294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:21:10.923976    4294 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2025-11-20 20:21:10.914934662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 20:21:10.924135    4294 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 20:21:10.924408    4294 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1120 20:21:10.924578    4294 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1120 20:21:10.927663    4294 out.go:171] Using Docker driver with root privileges
	I1120 20:21:10.930491    4294 cni.go:84] Creating CNI manager for ""
	I1120 20:21:10.930556    4294 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1120 20:21:10.930571    4294 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 20:21:10.930652    4294 start.go:353] cluster config:
	{Name:download-only-844363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-844363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:21:10.933624    4294 out.go:99] Starting "download-only-844363" primary control-plane node in "download-only-844363" cluster
	I1120 20:21:10.933667    4294 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1120 20:21:10.936683    4294 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:21:10.936740    4294 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 20:21:10.936961    4294 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:21:10.953447    4294 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1120 20:21:10.953573    4294 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1120 20:21:10.953599    4294 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1120 20:21:10.953606    4294 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1120 20:21:10.953618    4294 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1120 20:21:10.994994    4294 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1120 20:21:10.995023    4294 cache.go:65] Caching tarball of preloaded images
	I1120 20:21:10.995199    4294 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1120 20:21:10.998178    4294 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1120 20:21:10.998216    4294 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1120 20:21:11.109103    4294 preload.go:295] Got checksum from GCS API "435977642a202d20ca04f26d87d875a8"
	I1120 20:21:11.109179    4294 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:435977642a202d20ca04f26d87d875a8 -> /home/jenkins/minikube-integration/21923-2300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-844363 host does not exist
	  To start a cluster, run: "minikube start -p download-only-844363"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-844363
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1120 20:21:21.315105    4089 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-648948 --alsologtostderr --binary-mirror http://127.0.0.1:46529 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-648948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-648948
--- PASS: TestBinaryMirror (0.61s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-657501
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-657501: exit status 85 (73.236012ms)

-- stdout --
	* Profile "addons-657501" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-657501"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-657501
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-657501: exit status 85 (69.08037ms)

-- stdout --
	* Profile "addons-657501" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-657501"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (156.73s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-657501 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-657501 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m36.72312579s)
--- PASS: TestAddons/Setup (156.73s)

TestAddons/serial/Volcano (39.96s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 59.164643ms
addons_test.go:876: volcano-admission stabilized in 59.891602ms
addons_test.go:868: volcano-scheduler stabilized in 60.131615ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-whjxr" [252098df-0eed-4da2-968a-89bacb8ec723] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003199335s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-zs9h7" [e8662ec8-4f94-4c79-9e2a-e916b546a62a] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003739402s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-ndr7w" [ac5f0a70-3557-46fb-93ad-c92795b3371d] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004066802s
addons_test.go:903: (dbg) Run:  kubectl --context addons-657501 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-657501 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-657501 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [f8100c05-7eec-46e6-ab1f-580b2ef325bf] Pending
helpers_test.go:352: "test-job-nginx-0" [f8100c05-7eec-46e6-ab1f-580b2ef325bf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [f8100c05-7eec-46e6-ab1f-580b2ef325bf] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.011463288s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-657501 addons disable volcano --alsologtostderr -v=1: (12.198669878s)
--- PASS: TestAddons/serial/Volcano (39.96s)

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-657501 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-657501 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/serial/GCPAuth/FakeCredentials (10.01s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-657501 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-657501 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [94f3ff53-6e0d-4caf-a548-4e82ce491ad7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [94f3ff53-6e0d-4caf-a548-4e82ce491ad7] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004267309s
addons_test.go:694: (dbg) Run:  kubectl --context addons-657501 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-657501 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-657501 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-657501 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.01s)

TestAddons/parallel/Registry (15.86s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.257696ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-95pwx" [d9799dd8-c9a3-4204-b4fd-523a56d00232] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003772992s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-j5qdb" [b386e2d2-6106-4999-bc9f-8a9da1415c6c] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002917475s
addons_test.go:392: (dbg) Run:  kubectl --context addons-657501 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-657501 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-657501 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.849333942s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 ip
2025/11/20 20:25:13 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.86s)

TestAddons/parallel/RegistryCreds (0.79s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.886754ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-657501
addons_test.go:332: (dbg) Run:  kubectl --context addons-657501 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.79s)

TestAddons/parallel/Ingress (18.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-657501 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-657501 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-657501 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [31e7aeec-1fec-414b-9692-27ec4d262bea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [31e7aeec-1fec-414b-9692-27ec4d262bea] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003439463s
I1120 20:26:18.163561    4089 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-657501 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-657501 addons disable ingress-dns --alsologtostderr -v=1: (1.135043134s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-657501 addons disable ingress --alsologtostderr -v=1: (7.813525702s)
--- PASS: TestAddons/parallel/Ingress (18.81s)

TestAddons/parallel/InspektorGadget (11.93s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-bvxbt" [75c6f94f-5b7f-4277-ab48-54fa896b2c9b] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003833482s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-657501 addons disable inspektor-gadget --alsologtostderr -v=1: (5.929420773s)
--- PASS: TestAddons/parallel/InspektorGadget (11.93s)

TestAddons/parallel/MetricsServer (5.86s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.13607ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-7xtcs" [8758154c-3311-4f32-a57c-0075448ed5be] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003744309s
addons_test.go:463: (dbg) Run:  kubectl --context addons-657501 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.86s)

TestAddons/parallel/CSI (33.17s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1120 20:25:36.404903    4089 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1120 20:25:36.408713    4089 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1120 20:25:36.408747    4089 kapi.go:107] duration metric: took 6.928996ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.939196ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-657501 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-657501 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-657501 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-657501 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [d2700c99-02a2-4b36-8f21-e41fe447afc9] Pending
helpers_test.go:352: "task-pv-pod" [d2700c99-02a2-4b36-8f21-e41fe447afc9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [d2700c99-02a2-4b36-8f21-e41fe447afc9] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003287808s
addons_test.go:572: (dbg) Run:  kubectl --context addons-657501 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-657501 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-657501 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-657501 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-657501 delete pod task-pv-pod: (1.142136098s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-657501 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-657501 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-657501 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-657501 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-657501 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-657501 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-657501 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-657501 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [a7b2b1cd-18c3-47b0-8c4d-6b78140b536b] Pending
helpers_test.go:352: "task-pv-pod-restore" [a7b2b1cd-18c3-47b0-8c4d-6b78140b536b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [a7b2b1cd-18c3-47b0-8c4d-6b78140b536b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00415221s
addons_test.go:614: (dbg) Run:  kubectl --context addons-657501 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-657501 delete pod task-pv-pod-restore: (1.479068172s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-657501 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-657501 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-657501 addons disable volumesnapshots --alsologtostderr -v=1: (1.037726798s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-657501 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.849495243s)
--- PASS: TestAddons/parallel/CSI (33.17s)

TestAddons/parallel/Headlamp (16.08s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-657501 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-657501 --alsologtostderr -v=1: (1.025448306s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-tl6k4" [be7ec26b-f779-4002-be32-6dc97cad0761] Pending
helpers_test.go:352: "headlamp-6945c6f4d-tl6k4" [be7ec26b-f779-4002-be32-6dc97cad0761] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-tl6k4" [be7ec26b-f779-4002-be32-6dc97cad0761] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003521187s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-657501 addons disable headlamp --alsologtostderr -v=1: (6.047527286s)
--- PASS: TestAddons/parallel/Headlamp (16.08s)

TestAddons/parallel/CloudSpanner (5.72s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-xk2sf" [682159e0-c39e-4673-8eb1-8e9195f894d1] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003506128s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.72s)

TestAddons/parallel/LocalPath (51.3s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-657501 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-657501 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-657501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-657501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-657501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-657501 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-657501 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [476ba1b0-bd3f-42a7-b886-a10a2a3028f7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [476ba1b0-bd3f-42a7-b886-a10a2a3028f7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [476ba1b0-bd3f-42a7-b886-a10a2a3028f7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003077942s
addons_test.go:967: (dbg) Run:  kubectl --context addons-657501 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 ssh "cat /opt/local-path-provisioner/pvc-05521924-f74b-4cc0-9a47-bdefb0b7b1c0_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-657501 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-657501 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-657501 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.109435228s)
--- PASS: TestAddons/parallel/LocalPath (51.30s)
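The LocalPath flow above can be replayed by hand against the same profile; a rough sketch using the same testdata manifests (the pvc-... directory name under /opt/local-path-provisioner is generated per run, shown as a placeholder here):
    kubectl --context addons-657501 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-657501 apply -f testdata/storage-provisioner-rancher/pod.yaml
    kubectl --context addons-657501 get pvc test-pvc -n default -o jsonpath='{.status.phase}'    # poll until the claim is bound
    out/minikube-linux-arm64 -p addons-657501 ssh "cat /opt/local-path-provisioner/<pvc-uid>_default_test-pvc/file1"
    kubectl --context addons-657501 delete pod test-local-path
    kubectl --context addons-657501 delete pvc test-pvc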

TestAddons/parallel/NvidiaDevicePlugin (6.58s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-ntkp4" [4536c519-7dbf-4897-b569-753e87aaa79e] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003827774s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.58s)

TestAddons/parallel/Yakd (10.84s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-bvfdg" [54cd5ce2-3421-496a-b1eb-a05966cb4f86] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00655401s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-657501 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-657501 addons disable yakd --alsologtostderr -v=1: (5.829672531s)
--- PASS: TestAddons/parallel/Yakd (10.84s)

TestAddons/StoppedEnableDisable (12.44s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-657501
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-657501: (12.169508133s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-657501
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-657501
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-657501
--- PASS: TestAddons/StoppedEnableDisable (12.44s)
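The point of this check is that addon toggles still succeed while the profile is stopped; condensed from the commands above:
    out/minikube-linux-arm64 stop -p addons-657501
    out/minikube-linux-arm64 addons enable dashboard -p addons-657501
    out/minikube-linux-arm64 addons disable dashboard -p addons-657501
    out/minikube-linux-arm64 addons disable gvisor -p addons-657501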

TestCertOptions (37.69s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-530158 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-530158 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.417395876s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-530158 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-530158 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-530158 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-530158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-530158
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-530158: (2.535182491s)
--- PASS: TestCertOptions (37.69s)
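The assertions here presumably amount to the extra --apiserver-ips/--apiserver-names showing up as SANs in the generated apiserver certificate and port 8555 landing in the kubeconfig; the same inspection can be done manually with the commands the test runs:
    out/minikube-linux-arm64 -p cert-options-530158 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    kubectl --context cert-options-530158 config view
    out/minikube-linux-arm64 ssh -p cert-options-530158 -- "sudo cat /etc/kubernetes/admin.conf"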

TestCertExpiration (232.73s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-339813 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-339813 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (42.465859873s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-339813 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-339813 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.746644561s)
helpers_test.go:175: Cleaning up "cert-expiration-339813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-339813
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-339813: (2.516400406s)
--- PASS: TestCertExpiration (232.73s)

TestForceSystemdFlag (39.33s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-795595 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-795595 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.060208941s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-795595 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-795595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-795595
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-795595: (2.857039573s)
--- PASS: TestForceSystemdFlag (39.33s)
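--force-systemd switches the node to the systemd cgroup driver; with containerd that should show up as SystemdCgroup = true in the runc runtime options of the dumped config. A quick manual check (the grep is illustrative, not part of the test):
    out/minikube-linux-arm64 -p force-systemd-flag-795595 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup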

TestForceSystemdEnv (43.11s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-444240 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-444240 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.854693675s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-444240 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-444240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-444240
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-444240: (2.856318263s)
--- PASS: TestForceSystemdEnv (43.11s)

TestDockerEnvContainerd (45.77s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-117799 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-117799 --driver=docker  --container-runtime=containerd: (29.802999903s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-117799"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-117799": (1.069819376s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-dshm1zBupEcb/agent.23572" SSH_AGENT_PID="23573" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-dshm1zBupEcb/agent.23572" SSH_AGENT_PID="23573" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-dshm1zBupEcb/agent.23572" SSH_AGENT_PID="23573" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.243124019s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-dshm1zBupEcb/agent.23572" SSH_AGENT_PID="23573" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-117799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-117799
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-117799: (2.11625656s)
--- PASS: TestDockerEnvContainerd (45.77s)
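Outside the test harness, docker-env output is normally applied with eval instead of being captured into variables; a sketch against the same profile:
    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-117799)"
    docker version
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls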

TestErrorSpam/setup (34.99s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-235173 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-235173 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-235173 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-235173 --driver=docker  --container-runtime=containerd: (34.98560803s)
--- PASS: TestErrorSpam/setup (34.99s)

TestErrorSpam/start (0.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-235173 --log_dir /tmp/nospam-235173 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-235173 --log_dir /tmp/nospam-235173 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-235173 --log_dir /tmp/nospam-235173 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

TestErrorSpam/status (1.13s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-235173 --log_dir /tmp/nospam-235173 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-235173 --log_dir /tmp/nospam-235173 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-235173 --log_dir /tmp/nospam-235173 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (1.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-235173 --log_dir /tmp/nospam-235173 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-235173 --log_dir /tmp/nospam-235173 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-235173 --log_dir /tmp/nospam-235173 pause
--- PASS: TestErrorSpam/pause (1.69s)

TestErrorSpam/unpause (1.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-235173 --log_dir /tmp/nospam-235173 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-235173 --log_dir /tmp/nospam-235173 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-235173 --log_dir /tmp/nospam-235173 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (1.66s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-235173 --log_dir /tmp/nospam-235173 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-235173 --log_dir /tmp/nospam-235173 stop: (1.465802078s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-235173 --log_dir /tmp/nospam-235173 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-235173 --log_dir /tmp/nospam-235173 stop
--- PASS: TestErrorSpam/stop (1.66s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21923-2300/.minikube/files/etc/test/nested/copy/4089/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.41s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-365934 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1120 20:28:58.761193    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:28:58.768348    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:28:58.779841    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:28:58.801242    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:28:58.842774    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:28:58.924238    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:28:59.085783    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:28:59.407557    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:29:00.051267    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:29:01.332631    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:29:03.894557    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:29:09.016010    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:29:19.257599    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-365934 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m20.408636398s)
--- PASS: TestFunctional/serial/StartWithProxy (80.41s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.21s)

=== RUN   TestFunctional/serial/SoftStart
I1120 20:29:39.261840    4089 config.go:182] Loaded profile config "functional-365934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-365934 --alsologtostderr -v=8
E1120 20:29:39.739328    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-365934 --alsologtostderr -v=8: (7.195925098s)
functional_test.go:678: soft start took 7.207248627s for "functional-365934" cluster.
I1120 20:29:46.465280    4089 config.go:182] Loaded profile config "functional-365934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (7.21s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-365934 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-365934 cache add registry.k8s.io/pause:3.1: (1.339548772s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-365934 cache add registry.k8s.io/pause:3.3: (1.156571043s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-365934 cache add registry.k8s.io/pause:latest: (1.012960081s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-365934 /tmp/TestFunctionalserialCacheCmdcacheadd_local3846090335/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 cache add minikube-local-cache-test:functional-365934
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 cache delete minikube-local-cache-test:functional-365934
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-365934
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-365934 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (296.332463ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)
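cache reload pushes images already held in minikube's local cache back into the node's runtime, which is what this sequence exercises: remove the image on the node, confirm inspecti fails, reload, confirm it is back:
    out/minikube-linux-arm64 -p functional-365934 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-365934 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # non-zero exit: image gone
    out/minikube-linux-arm64 -p functional-365934 cache reload
    out/minikube-linux-arm64 -p functional-365934 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds again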

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 kubectl -- --context functional-365934 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-365934 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (46.16s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-365934 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1120 20:30:20.701036    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-365934 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.160650281s)
functional_test.go:776: restart took 46.160758427s for "functional-365934" cluster.
I1120 20:30:40.185202    4089 config.go:182] Loaded profile config "functional-365934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (46.16s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-365934 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
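The health check reads each control-plane pod's phase and Ready condition via the selector above; a compact manual equivalent (assuming the kubeadm component label on the static pods):
    kubectl --context functional-365934 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.labels.component}{": "}{.status.phase}{"\n"}{end}'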

TestFunctional/serial/LogsCmd (1.47s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-365934 logs: (1.465754437s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

TestFunctional/serial/LogsFileCmd (1.55s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 logs --file /tmp/TestFunctionalserialLogsFileCmd301796038/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-365934 logs --file /tmp/TestFunctionalserialLogsFileCmd301796038/001/logs.txt: (1.546831211s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)

TestFunctional/serial/InvalidService (4.39s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-365934 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-365934
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-365934: exit status 115 (456.716607ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30991 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-365934 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.39s)

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-365934 config get cpus: exit status 14 (51.61446ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-365934 config get cpus: exit status 14 (103.899085ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
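config get exits with status 14 when the key is absent, which is what the two non-zero exits above verify; the full round trip looks like:
    out/minikube-linux-arm64 -p functional-365934 config set cpus 2
    out/minikube-linux-arm64 -p functional-365934 config get cpus      # prints 2
    out/minikube-linux-arm64 -p functional-365934 config unset cpus
    out/minikube-linux-arm64 -p functional-365934 config get cpus      # exit status 14: key not found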

TestFunctional/parallel/DashboardCmd (7.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-365934 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-365934 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 41110: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.01s)

TestFunctional/parallel/DryRun (0.64s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-365934 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-365934 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (261.588831ms)

-- stdout --
	* [functional-365934] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1120 20:31:27.928044   40315 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:31:27.928172   40315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:31:27.928184   40315 out.go:374] Setting ErrFile to fd 2...
	I1120 20:31:27.928189   40315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:31:27.928451   40315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 20:31:27.928827   40315 out.go:368] Setting JSON to false
	I1120 20:31:27.929768   40315 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":837,"bootTime":1763669851,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1120 20:31:27.929835   40315 start.go:143] virtualization:  
	I1120 20:31:27.933344   40315 out.go:179] * [functional-365934] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 20:31:27.937138   40315 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:31:27.937218   40315 notify.go:221] Checking for updates...
	I1120 20:31:27.944011   40315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:31:27.946961   40315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 20:31:27.949910   40315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	I1120 20:31:27.952835   40315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 20:31:27.955913   40315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:31:27.960855   40315 config.go:182] Loaded profile config "functional-365934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:31:27.961470   40315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:31:28.003219   40315 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 20:31:28.003483   40315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:31:28.108331   40315 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-20 20:31:28.095605222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 20:31:28.108427   40315 docker.go:319] overlay module found
	I1120 20:31:28.111632   40315 out.go:179] * Using the docker driver based on existing profile
	I1120 20:31:28.114575   40315 start.go:309] selected driver: docker
	I1120 20:31:28.114595   40315 start.go:930] validating driver "docker" against &{Name:functional-365934 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-365934 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:31:28.114697   40315 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:31:28.118279   40315 out.go:203] 
	W1120 20:31:28.121160   40315 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1120 20:31:28.123994   40315 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-365934 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.64s)
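--dry-run runs only the validation phase, so the undersized --memory 250MB request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) and leaves the existing profile untouched; roughly:
    out/minikube-linux-arm64 start -p functional-365934 --dry-run --memory 250MB --driver=docker --container-runtime=containerd; echo $?    # 23
    out/minikube-linux-arm64 start -p functional-365934 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd     # exit 0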

TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-365934 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-365934 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (280.643883ms)

-- stdout --
	* [functional-365934] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1120 20:31:29.270179   40710 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:31:29.270328   40710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:31:29.270337   40710 out.go:374] Setting ErrFile to fd 2...
	I1120 20:31:29.270342   40710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:31:29.270873   40710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 20:31:29.271259   40710 out.go:368] Setting JSON to false
	I1120 20:31:29.273039   40710 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":839,"bootTime":1763669851,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1120 20:31:29.273109   40710 start.go:143] virtualization:  
	I1120 20:31:29.276655   40710 out.go:179] * [functional-365934] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1120 20:31:29.279767   40710 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:31:29.279839   40710 notify.go:221] Checking for updates...
	I1120 20:31:29.290852   40710 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:31:29.293960   40710 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 20:31:29.296873   40710 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	I1120 20:31:29.299693   40710 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 20:31:29.302573   40710 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:31:29.305909   40710 config.go:182] Loaded profile config "functional-365934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:31:29.306502   40710 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:31:29.341352   40710 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 20:31:29.341453   40710 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:31:29.460114   40710 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-20 20:31:29.450580242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 20:31:29.460216   40710 docker.go:319] overlay module found
	I1120 20:31:29.463484   40710 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1120 20:31:29.466376   40710 start.go:309] selected driver: docker
	I1120 20:31:29.466397   40710 start.go:930] validating driver "docker" against &{Name:functional-365934 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-365934 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:31:29.466536   40710 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:31:29.470216   40710 out.go:203] 
	W1120 20:31:29.473181   40710 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1120 20:31:29.475993   40710 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)
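
The French output above is minikube's localized memory validation firing (250 MiB requested vs. a 1800 MB usable minimum). A rough way to reproduce a message like it; the exact flags the test passes are not visible in this log, so the invocation below is illustrative only:

# Illustrative reproduction (assumed flags): request less memory than the
# minimum with a French locale so the rejection is printed in French.
LC_ALL=fr out/minikube-linux-arm64 start -p functional-365934 --dry-run --memory 250MB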

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.44s)
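
The templated call above pulls individual fields out of the status struct; a single-field variant (a sketch, not something the test runs):

# Print only the API server state using the same Go-template mechanism.
out/minikube-linux-arm64 -p functional-365934 status -f "{{.APIServer}}"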

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (6.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-365934 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-365934 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-shgk9" [5755991a-c179-4d99-be6d-d8f07ff3daa0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-shgk9" [5755991a-c179-4d99-be6d-d8f07ff3daa0] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.003420717s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32092
functional_test.go:1680: http://192.168.49.2:32092: success! body:
Request served by hello-node-connect-7d85dfc575-shgk9

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32092
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.62s)
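
The block above is the standard expose-and-probe flow; condensed, with the node port taken from this run (it is assigned dynamically, so it will differ between runs):

# Deploy an echo server, expose it as a NodePort service, ask minikube for the
# URL, then hit it; the echoed request body is what the test checks.
kubectl --context functional-365934 create deployment hello-node-connect --image=kicbase/echo-server
kubectl --context functional-365934 expose deployment hello-node-connect --type=NodePort --port=8080
out/minikube-linux-arm64 -p functional-365934 service hello-node-connect --url
curl http://192.168.49.2:32092    # URL as reported in this run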

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [8112600a-8157-4bd1-8df6-d6d1fdbe19d2] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003654487s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-365934 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-365934 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-365934 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-365934 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [26688a04-05fb-40d8-b9d0-970d3a34cc37] Pending
helpers_test.go:352: "sp-pod" [26688a04-05fb-40d8-b9d0-970d3a34cc37] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [26688a04-05fb-40d8-b9d0-970d3a34cc37] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00282982s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-365934 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-365934 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-365934 delete -f testdata/storage-provisioner/pod.yaml: (1.514993216s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-365934 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4dfb6a35-9e67-4f5a-a7ac-1c2c58ad3196] Pending
helpers_test.go:352: "sp-pod" [4dfb6a35-9e67-4f5a-a7ac-1c2c58ad3196] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004340926s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-365934 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.81s)
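
The manifests referenced above (testdata/storage-provisioner/pvc.yaml and pod.yaml) are not reproduced in this report. A minimal equivalent pair, reusing only the names visible in the log; the size, image and reliance on the default StorageClass are assumptions:

# Sketch only: a claim plus a pod that mounts it at /tmp/mount, as exercised above.
cat <<'EOF' | kubectl --context functional-365934 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF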

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh -n functional-365934 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 cp functional-365934:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd381804440/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh -n functional-365934 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh -n functional-365934 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.71s)
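
The two copy directions exercised above, in general form (file names illustrative; with no node prefix the target path refers to the control-plane node):

# host -> node
out/minikube-linux-arm64 -p functional-365934 cp ./local.txt /home/docker/local.txt
# node -> host (source prefixed with the node name)
out/minikube-linux-arm64 -p functional-365934 cp functional-365934:/home/docker/local.txt ./local-copy.txt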

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4089/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "sudo cat /etc/test/nested/copy/4089/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)
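
FileSync verifies that host files staged for sync end up inside the node at the same relative path. Assuming the default minikube home, staging would look roughly like this (the copy into the node happens when the cluster is provisioned or restarted):

# Anything under ~/.minikube/files/ is mirrored into the node's filesystem,
# e.g. .../files/etc/test/nested/copy/4089/hosts -> /etc/test/nested/copy/4089/hosts.
mkdir -p ~/.minikube/files/etc/test/nested/copy/4089
cp /etc/hosts ~/.minikube/files/etc/test/nested/copy/4089/hosts
out/minikube-linux-arm64 -p functional-365934 ssh "sudo cat /etc/test/nested/copy/4089/hosts"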

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4089.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "sudo cat /etc/ssl/certs/4089.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4089.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "sudo cat /usr/share/ca-certificates/4089.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/40892.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "sudo cat /etc/ssl/certs/40892.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/40892.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "sudo cat /usr/share/ca-certificates/40892.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.67s)
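
CertSync checks that a staged CA certificate shows up in the node both by name (4089.pem) and under its OpenSSL subject-hash name (51391683.0). The hash-to-name relationship can be checked on the host; openssl here is an assumption, not something the test invokes:

# Prints the subject hash that becomes the *.0 filename inside the node.
openssl x509 -in 4089.pem -noout -subject_hash    # path to the staged cert is illustrative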

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-365934 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
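
The same label dump via jsonpath, as an alternative to the go-template above (not what the test runs):

# Show all labels on the first node.
kubectl --context functional-365934 get nodes -o jsonpath='{.items[0].metadata.labels}'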

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-365934 ssh "sudo systemctl is-active docker": exit status 1 (486.990378ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-365934 ssh "sudo systemctl is-active crio": exit status 1 (394.894669ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.88s)
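
Both probes above exit non-zero because docker and crio are inactive on a containerd profile; the complementary check for the runtime that should be active (not part of the test) is:

# Expected to print "active" on this profile.
out/minikube-linux-arm64 -p functional-365934 ssh "sudo systemctl is-active containerd"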

                                                
                                    
x
+
TestFunctional/parallel/License (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-365934 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-365934 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-365934 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-365934 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 35420: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-365934 version -o=json --components: (1.542603159s)
--- PASS: TestFunctional/parallel/Version/components (1.54s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-365934 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-365934 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [f0e9cbaa-08dc-4326-bd33-c8766f5ae5c4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [f0e9cbaa-08dc-4326-bd33-c8766f5ae5c4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004063501s
I1120 20:30:59.138275    4089 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.44s)
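
testdata/testsvc.yaml is not reproduced in this report; its essential shape is a pod labelled run=nginx-svc behind a LoadBalancer service, which the tunnel started above then assigns an external IP (see the IngressIP and AccessDirect results further down). A sketch, reusing only the names visible in the log:

# Sketch only: LoadBalancer service + pod for `minikube tunnel` to expose.
cat <<'EOF' | kubectl --context functional-365934 apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-svc
  labels:
    run: nginx-svc
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer
  selector:
    run: nginx-svc
  ports:
  - port: 80
EOF
# Keep `out/minikube-linux-arm64 -p functional-365934 tunnel` running in another terminal.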

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-365934 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-365934
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-365934
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-365934 image ls --format short --alsologtostderr:
I1120 20:31:31.671353   41272 out.go:360] Setting OutFile to fd 1 ...
I1120 20:31:31.671488   41272 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:31:31.671498   41272 out.go:374] Setting ErrFile to fd 2...
I1120 20:31:31.671504   41272 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:31:31.671747   41272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
I1120 20:31:31.672364   41272 config.go:182] Loaded profile config "functional-365934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:31:31.672480   41272 config.go:182] Loaded profile config "functional-365934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:31:31.672966   41272 cli_runner.go:164] Run: docker container inspect functional-365934 --format={{.State.Status}}
I1120 20:31:31.691941   41272 ssh_runner.go:195] Run: systemctl --version
I1120 20:31:31.692009   41272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365934
I1120 20:31:31.709339   41272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/functional-365934/id_rsa Username:docker}
I1120 20:31:31.813374   41272 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
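
image ls takes a --format flag; the three blocks that follow exercise the table, json and yaml variants of the same listing:

# Same image listing in each supported output format.
out/minikube-linux-arm64 -p functional-365934 image ls --format short
out/minikube-linux-arm64 -p functional-365934 image ls --format table
out/minikube-linux-arm64 -p functional-365934 image ls --format json
out/minikube-linux-arm64 -p functional-365934 image ls --format yaml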

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-365934 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:7eb2c6 │ 20.7MB │
│ localhost/my-image                          │ functional-365934  │ sha256:d9870a │ 831kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:43911e │ 24.6MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ docker.io/library/minikube-local-cache-test │ functional-365934  │ sha256:c7ba0c │ 988B   │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:b5f57e │ 15.8MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/kicbase/echo-server               │ functional-365934  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:ce2d2c │ 2.17MB │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:05baa9 │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/nginx                     │ latest             │ sha256:bb747c │ 58.3MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-365934 image ls --format table --alsologtostderr:
I1120 20:31:36.552181   41708 out.go:360] Setting OutFile to fd 1 ...
I1120 20:31:36.552348   41708 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:31:36.552369   41708 out.go:374] Setting ErrFile to fd 2...
I1120 20:31:36.552388   41708 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:31:36.552857   41708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
I1120 20:31:36.553510   41708 config.go:182] Loaded profile config "functional-365934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:31:36.553666   41708 config.go:182] Loaded profile config "functional-365934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:31:36.554143   41708 cli_runner.go:164] Run: docker container inspect functional-365934 --format={{.State.Status}}
I1120 20:31:36.571710   41708 ssh_runner.go:195] Run: systemctl --version
I1120 20:31:36.571764   41708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365934
I1120 20:31:36.589332   41708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/functional-365934/id_rsa Username:docker}
I1120 20:31:36.693391   41708 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-365934 image ls --format json --alsologtostderr:
[{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:c7ba0c8b1f3ea7bc6ea926f5f9f4ccb23488da1b1961b3aafc2f40eef5c87405","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-365934"],"size":"988"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:d9870aa40b5f05e3d627445941cbe0853b8d78f0de23b8d08207b8874ec6a713","repoDigests":[],"repoTags":["localhost/my-image:functional-365934"],"size":"830618"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac3
8a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-365934","docker.io/kicbase/echo-server:latest"],"size":"2173567"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:43911e833d64d4f3046086
2fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"24571109"},{"id":"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"22788047"},{"id":"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"15779817"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigest
s":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23117513"},{"id":"sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"58263548"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3
a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"20720058"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-365934 image ls --format json --alsologtostderr:
I1120 20:31:36.461899   41689 out.go:360] Setting OutFile to fd 1 ...
I1120 20:31:36.462057   41689 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:31:36.462078   41689 out.go:374] Setting ErrFile to fd 2...
I1120 20:31:36.462098   41689 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:31:36.462403   41689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
I1120 20:31:36.463109   41689 config.go:182] Loaded profile config "functional-365934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:31:36.463274   41689 config.go:182] Loaded profile config "functional-365934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:31:36.463861   41689 cli_runner.go:164] Run: docker container inspect functional-365934 --format={{.State.Status}}
I1120 20:31:36.481564   41689 ssh_runner.go:195] Run: systemctl --version
I1120 20:31:36.481619   41689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365934
I1120 20:31:36.500131   41689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/functional-365934/id_rsa Username:docker}
I1120 20:31:36.601579   41689 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-365934 image ls --format yaml --alsologtostderr:
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:c7ba0c8b1f3ea7bc6ea926f5f9f4ccb23488da1b1961b3aafc2f40eef5c87405
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-365934
size: "988"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "24571109"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "22788047"
- id: sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "15779817"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"
- id: sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "58263548"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "20720058"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-365934
- docker.io/kicbase/echo-server:latest
size: "2173567"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-365934 image ls --format yaml --alsologtostderr:
I1120 20:31:31.908805   41314 out.go:360] Setting OutFile to fd 1 ...
I1120 20:31:31.909785   41314 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:31:31.909813   41314 out.go:374] Setting ErrFile to fd 2...
I1120 20:31:31.909818   41314 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:31:31.910403   41314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
I1120 20:31:31.911147   41314 config.go:182] Loaded profile config "functional-365934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:31:31.911313   41314 config.go:182] Loaded profile config "functional-365934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:31:31.911820   41314 cli_runner.go:164] Run: docker container inspect functional-365934 --format={{.State.Status}}
I1120 20:31:31.929654   41314 ssh_runner.go:195] Run: systemctl --version
I1120 20:31:31.929711   41314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365934
I1120 20:31:31.947645   41314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/functional-365934/id_rsa Username:docker}
I1120 20:31:32.049452   41314 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-365934 ssh pgrep buildkitd: exit status 1 (334.720687ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image build -t localhost/my-image:functional-365934 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-365934 image build -t localhost/my-image:functional-365934 testdata/build --alsologtostderr: (3.73071057s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-365934 image build -t localhost/my-image:functional-365934 testdata/build --alsologtostderr:
I1120 20:31:32.516143   41415 out.go:360] Setting OutFile to fd 1 ...
I1120 20:31:32.516394   41415 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:31:32.516418   41415 out.go:374] Setting ErrFile to fd 2...
I1120 20:31:32.516436   41415 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:31:32.516736   41415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
I1120 20:31:32.517411   41415 config.go:182] Loaded profile config "functional-365934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:31:32.519994   41415 config.go:182] Loaded profile config "functional-365934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1120 20:31:32.520567   41415 cli_runner.go:164] Run: docker container inspect functional-365934 --format={{.State.Status}}
I1120 20:31:32.541329   41415 ssh_runner.go:195] Run: systemctl --version
I1120 20:31:32.541381   41415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-365934
I1120 20:31:32.559476   41415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/functional-365934/id_rsa Username:docker}
I1120 20:31:32.661534   41415 build_images.go:162] Building image from path: /tmp/build.3545905782.tar
I1120 20:31:32.661603   41415 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1120 20:31:32.676478   41415 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3545905782.tar
I1120 20:31:32.681249   41415 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3545905782.tar: stat -c "%s %y" /var/lib/minikube/build/build.3545905782.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3545905782.tar': No such file or directory
I1120 20:31:32.681284   41415 ssh_runner.go:362] scp /tmp/build.3545905782.tar --> /var/lib/minikube/build/build.3545905782.tar (3072 bytes)
I1120 20:31:32.708344   41415 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3545905782
I1120 20:31:32.720177   41415 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3545905782 -xf /var/lib/minikube/build/build.3545905782.tar
I1120 20:31:32.734556   41415 containerd.go:394] Building image: /var/lib/minikube/build/build.3545905782
I1120 20:31:32.734662   41415 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3545905782 --local dockerfile=/var/lib/minikube/build/build.3545905782 --output type=image,name=localhost/my-image:functional-365934
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.8s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:29347086a2c3a697fe4a5a89106f1a95cdb609e854770b24f4569ff9015558cd
#8 exporting manifest sha256:29347086a2c3a697fe4a5a89106f1a95cdb609e854770b24f4569ff9015558cd 0.0s done
#8 exporting config sha256:d9870aa40b5f05e3d627445941cbe0853b8d78f0de23b8d08207b8874ec6a713 0.0s done
#8 naming to localhost/my-image:functional-365934 done
#8 DONE 0.2s
I1120 20:31:36.125593   41415 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3545905782 --local dockerfile=/var/lib/minikube/build/build.3545905782 --output type=image,name=localhost/my-image:functional-365934: (3.390901482s)
I1120 20:31:36.125655   41415 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3545905782
I1120 20:31:36.134240   41415 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3545905782.tar
I1120 20:31:36.146140   41415 build_images.go:218] Built localhost/my-image:functional-365934 from /tmp/build.3545905782.tar
I1120 20:31:36.146174   41415 build_images.go:134] succeeded building to: functional-365934
I1120 20:31:36.146180   41415 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image ls
2025/11/20 20:31:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.33s)
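
The buildkit trace above implies a three-step Dockerfile under testdata/build (FROM the busybox base, RUN true, ADD content.txt). The actual file is not shown in this report; reconstructed from the #1-#7 steps it would look roughly like this:

# Hypothetical reconstruction of testdata/build, based only on the build steps logged above.
mkdir -p build
cat > build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
echo test > build/content.txt
out/minikube-linux-arm64 -p functional-365934 image build -t localhost/my-image:functional-365934 ./build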

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-365934
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image load --daemon kicbase/echo-server:functional-365934 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-365934 image load --daemon kicbase/echo-server:functional-365934 --alsologtostderr: (1.113306021s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.37s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image load --daemon kicbase/echo-server:functional-365934 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-365934 image load --daemon kicbase/echo-server:functional-365934 --alsologtostderr: (1.008194586s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-365934
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image load --daemon kicbase/echo-server:functional-365934 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image save kicbase/echo-server:functional-365934 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image rm kicbase/echo-server:functional-365934 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-365934
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 image save --daemon kicbase/echo-server:functional-365934 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-365934
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
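
Taken together, the ImageCommands blocks above round-trip an image between the host's docker daemon, a tarball and the node's containerd store; the whole cycle in one place (tarball path illustrative, commands otherwise as run above):

# host daemon -> node, node -> tar, remove from node, tar -> node, node -> host daemon
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-365934
out/minikube-linux-arm64 -p functional-365934 image load --daemon kicbase/echo-server:functional-365934
out/minikube-linux-arm64 -p functional-365934 image save kicbase/echo-server:functional-365934 /tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-365934 image rm kicbase/echo-server:functional-365934
out/minikube-linux-arm64 -p functional-365934 image load /tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-365934 image save --daemon kicbase/echo-server:functional-365934
docker image inspect kicbase/echo-server:functional-365934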

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-365934 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.235.39 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-365934 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.49s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-365934 /tmp/TestFunctionalparallelMountCmdany-port2661912755/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763670659383692454" to /tmp/TestFunctionalparallelMountCmdany-port2661912755/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763670659383692454" to /tmp/TestFunctionalparallelMountCmdany-port2661912755/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763670659383692454" to /tmp/TestFunctionalparallelMountCmdany-port2661912755/001/test-1763670659383692454
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-365934 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (538.284278ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1120 20:30:59.923079    4089 retry.go:31] will retry after 280.105993ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 20 20:30 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 20 20:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 20 20:30 test-1763670659383692454
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh cat /mount-9p/test-1763670659383692454
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-365934 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [98dd2c85-0c90-4475-be00-960439a88d0d] Pending
helpers_test.go:352: "busybox-mount" [98dd2c85-0c90-4475-be00-960439a88d0d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [98dd2c85-0c90-4475-be00-960439a88d0d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [98dd2c85-0c90-4475-be00-960439a88d0d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003103016s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-365934 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-365934 /tmp/TestFunctionalparallelMountCmdany-port2661912755/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.49s)
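The first findmnt check above fails while the 9p mount is still coming up, and the helper retries it after a short delay ("will retry after 280.105993ms"); the second attempt succeeds. A rough Go sketch of that retry-until-success pattern, with an illustrative backoff schedule and a placeholder profile name rather than the values the suite actually uses:

// Sketch of retrying a flaky check with increasing waits, similar in spirit
// to the retry.go lines in the log. Backoff values and profile name are
// illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retry runs check until it succeeds or the attempts are used up,
// doubling the wait between attempts.
func retry(attempts int, wait time.Duration, check func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = check(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		wait *= 2
	}
	return fmt.Errorf("check still failing after %d attempts: %v", attempts, err)
}

func main() {
	// Example check: is the 9p mount visible inside the guest yet?
	check := func() error {
		return exec.Command("minikube", "-p", "example-profile", // hypothetical profile
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
	}
	if err := retry(5, 300*time.Millisecond, check); err != nil {
		fmt.Println(err)
	}
}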

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.03s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-365934 /tmp/TestFunctionalparallelMountCmdspecific-port3976981618/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-365934 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (363.306528ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1120 20:31:08.240946    4089 retry.go:31] will retry after 345.729255ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-365934 /tmp/TestFunctionalparallelMountCmdspecific-port3976981618/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-365934 ssh "sudo umount -f /mount-9p": exit status 1 (344.969224ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-365934 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-365934 /tmp/TestFunctionalparallelMountCmdspecific-port3976981618/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.53s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-365934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup489544843/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-365934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup489544843/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-365934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup489544843/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-365934 ssh "findmnt -T" /mount1: exit status 1 (910.010343ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1120 20:31:10.819426    4089 retry.go:31] will retry after 723.941752ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-365934 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-365934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup489544843/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-365934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup489544843/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-365934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup489544843/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-365934 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-365934 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-9rkgh" [09deab0e-7af9-4365-bfbf-3703d7a80d5a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-9rkgh" [09deab0e-7af9-4365-bfbf-3703d7a80d5a] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.00510175s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)
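ServiceCmd/DeployApp creates the hello-node deployment, exposes it as a NodePort service, and then waits until a pod matching app=hello-node reports Running. A small Go sketch of that wait loop using kubectl's jsonpath output; the context name, selector handling and timeout here are placeholders, not values or helpers from this run:

// Sketch: poll kubectl until every pod matching a label selector reports
// phase Running, roughly what the "waiting ... for pods matching" step does.
// Context name, selector and timeout are placeholders.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podsRunning(context, selector string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"get", "pods", "-l", selector,
		"-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false, err
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range phases {
		if p != "Running" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		ok, err := podsRunning("example-context", "app=hello-node")
		if err == nil && ok {
			fmt.Println("all matching pods are Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pods")
}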

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.58s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.55s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "457.127912ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "88.815166ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 service list -o json
functional_test.go:1504: Took "621.440944ms" to run "out/minikube-linux-arm64 -p functional-365934 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "486.355053ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "96.486861ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30181
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.51s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-365934 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30181
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-365934
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-365934
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-365934
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (205.2s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1120 20:31:42.622726    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:33:58.759896    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:34:26.464747    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-351359 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (3m24.30404244s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (205.20s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.37s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-351359 kubectl -- rollout status deployment/busybox: (4.277354627s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- exec busybox-7b57f96db7-67gp2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- exec busybox-7b57f96db7-hxsnc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- exec busybox-7b57f96db7-q2g8z -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- exec busybox-7b57f96db7-67gp2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- exec busybox-7b57f96db7-hxsnc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- exec busybox-7b57f96db7-q2g8z -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- exec busybox-7b57f96db7-67gp2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- exec busybox-7b57f96db7-hxsnc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- exec busybox-7b57f96db7-q2g8z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.37s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.59s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- exec busybox-7b57f96db7-67gp2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- exec busybox-7b57f96db7-67gp2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- exec busybox-7b57f96db7-hxsnc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- exec busybox-7b57f96db7-hxsnc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- exec busybox-7b57f96db7-q2g8z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 kubectl -- exec busybox-7b57f96db7-q2g8z -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.59s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (30.05s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-351359 node add --alsologtostderr -v 5: (28.973991247s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-351359 status --alsologtostderr -v 5: (1.073280014s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.05s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.13s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-351359 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.29s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.292242685s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.29s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.6s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-351359 status --output json --alsologtostderr -v 5: (1.098470227s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp testdata/cp-test.txt ha-351359:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp ha-351359:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile464845754/001/cp-test_ha-351359.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp ha-351359:/home/docker/cp-test.txt ha-351359-m02:/home/docker/cp-test_ha-351359_ha-351359-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359 "sudo cat /home/docker/cp-test.txt"
E1120 20:35:48.702904    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:35:48.713886    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:35:48.725222    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:35:48.746758    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:35:48.788284    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m02 "sudo cat /home/docker/cp-test_ha-351359_ha-351359-m02.txt"
E1120 20:35:48.869756    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:35:49.031144    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp ha-351359:/home/docker/cp-test.txt ha-351359-m03:/home/docker/cp-test_ha-351359_ha-351359-m03.txt
E1120 20:35:49.353273    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m03 "sudo cat /home/docker/cp-test_ha-351359_ha-351359-m03.txt"
E1120 20:35:49.994694    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp ha-351359:/home/docker/cp-test.txt ha-351359-m04:/home/docker/cp-test_ha-351359_ha-351359-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m04 "sudo cat /home/docker/cp-test_ha-351359_ha-351359-m04.txt"
E1120 20:35:51.276131    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp testdata/cp-test.txt ha-351359-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp ha-351359-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile464845754/001/cp-test_ha-351359-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp ha-351359-m02:/home/docker/cp-test.txt ha-351359:/home/docker/cp-test_ha-351359-m02_ha-351359.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359 "sudo cat /home/docker/cp-test_ha-351359-m02_ha-351359.txt"
E1120 20:35:53.837394    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp ha-351359-m02:/home/docker/cp-test.txt ha-351359-m03:/home/docker/cp-test_ha-351359-m02_ha-351359-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m03 "sudo cat /home/docker/cp-test_ha-351359-m02_ha-351359-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp ha-351359-m02:/home/docker/cp-test.txt ha-351359-m04:/home/docker/cp-test_ha-351359-m02_ha-351359-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m04 "sudo cat /home/docker/cp-test_ha-351359-m02_ha-351359-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp testdata/cp-test.txt ha-351359-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp ha-351359-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile464845754/001/cp-test_ha-351359-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp ha-351359-m03:/home/docker/cp-test.txt ha-351359:/home/docker/cp-test_ha-351359-m03_ha-351359.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359 "sudo cat /home/docker/cp-test_ha-351359-m03_ha-351359.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp ha-351359-m03:/home/docker/cp-test.txt ha-351359-m02:/home/docker/cp-test_ha-351359-m03_ha-351359-m02.txt
E1120 20:35:58.958714    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m02 "sudo cat /home/docker/cp-test_ha-351359-m03_ha-351359-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp ha-351359-m03:/home/docker/cp-test.txt ha-351359-m04:/home/docker/cp-test_ha-351359-m03_ha-351359-m04.txt
helpers_test.go:573: (dbg) Done: out/minikube-linux-arm64 -p ha-351359 cp ha-351359-m03:/home/docker/cp-test.txt ha-351359-m04:/home/docker/cp-test_ha-351359-m03_ha-351359-m04.txt: (1.072663753s)
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m04 "sudo cat /home/docker/cp-test_ha-351359-m03_ha-351359-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp testdata/cp-test.txt ha-351359-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp ha-351359-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile464845754/001/cp-test_ha-351359-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp ha-351359-m04:/home/docker/cp-test.txt ha-351359:/home/docker/cp-test_ha-351359-m04_ha-351359.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359 "sudo cat /home/docker/cp-test_ha-351359-m04_ha-351359.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp ha-351359-m04:/home/docker/cp-test.txt ha-351359-m02:/home/docker/cp-test_ha-351359-m04_ha-351359-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m02 "sudo cat /home/docker/cp-test_ha-351359-m04_ha-351359-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 cp ha-351359-m04:/home/docker/cp-test.txt ha-351359-m03:/home/docker/cp-test_ha-351359-m04_ha-351359-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 ssh -n ha-351359-m03 "sudo cat /home/docker/cp-test_ha-351359-m04_ha-351359-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.60s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.02s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 node stop m02 --alsologtostderr -v 5
E1120 20:36:09.200019    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-351359 node stop m02 --alsologtostderr -v 5: (12.205498441s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-351359 status --alsologtostderr -v 5: exit status 7 (810.722094ms)

                                                
                                                
-- stdout --
	ha-351359
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-351359-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-351359-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-351359-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:36:18.478228   58090 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:36:18.478403   58090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:36:18.478415   58090 out.go:374] Setting ErrFile to fd 2...
	I1120 20:36:18.478420   58090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:36:18.478851   58090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 20:36:18.479107   58090 out.go:368] Setting JSON to false
	I1120 20:36:18.479153   58090 mustload.go:66] Loading cluster: ha-351359
	I1120 20:36:18.479255   58090 notify.go:221] Checking for updates...
	I1120 20:36:18.479782   58090 config.go:182] Loaded profile config "ha-351359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:36:18.479803   58090 status.go:174] checking status of ha-351359 ...
	I1120 20:36:18.480845   58090 cli_runner.go:164] Run: docker container inspect ha-351359 --format={{.State.Status}}
	I1120 20:36:18.499637   58090 status.go:371] ha-351359 host status = "Running" (err=<nil>)
	I1120 20:36:18.499661   58090 host.go:66] Checking if "ha-351359" exists ...
	I1120 20:36:18.499967   58090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-351359
	I1120 20:36:18.529205   58090 host.go:66] Checking if "ha-351359" exists ...
	I1120 20:36:18.529496   58090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:36:18.529545   58090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-351359
	I1120 20:36:18.551992   58090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/ha-351359/id_rsa Username:docker}
	I1120 20:36:18.664328   58090 ssh_runner.go:195] Run: systemctl --version
	I1120 20:36:18.671875   58090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:36:18.686305   58090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:36:18.752408   58090 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-20 20:36:18.742397273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 20:36:18.752950   58090 kubeconfig.go:125] found "ha-351359" server: "https://192.168.49.254:8443"
	I1120 20:36:18.752989   58090 api_server.go:166] Checking apiserver status ...
	I1120 20:36:18.753036   58090 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:36:18.768320   58090 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1431/cgroup
	I1120 20:36:18.778198   58090 api_server.go:182] apiserver freezer: "7:freezer:/docker/f793ee6c4eaf6cf2e03b34826069043ed8e58e08457c7ef33160c21534ea6bdf/kubepods/burstable/pod74da71f45b062780f1266a378bd7c1b9/1f350c2edf123672e8112a3239e895628ebd60f37bbbb33baccc4a2c74bc732a"
	I1120 20:36:18.778281   58090 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f793ee6c4eaf6cf2e03b34826069043ed8e58e08457c7ef33160c21534ea6bdf/kubepods/burstable/pod74da71f45b062780f1266a378bd7c1b9/1f350c2edf123672e8112a3239e895628ebd60f37bbbb33baccc4a2c74bc732a/freezer.state
	I1120 20:36:18.788048   58090 api_server.go:204] freezer state: "THAWED"
	I1120 20:36:18.788074   58090 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1120 20:36:18.797598   58090 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1120 20:36:18.797628   58090 status.go:463] ha-351359 apiserver status = Running (err=<nil>)
	I1120 20:36:18.797638   58090 status.go:176] ha-351359 status: &{Name:ha-351359 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:36:18.797655   58090 status.go:174] checking status of ha-351359-m02 ...
	I1120 20:36:18.797968   58090 cli_runner.go:164] Run: docker container inspect ha-351359-m02 --format={{.State.Status}}
	I1120 20:36:18.830618   58090 status.go:371] ha-351359-m02 host status = "Stopped" (err=<nil>)
	I1120 20:36:18.830641   58090 status.go:384] host is not running, skipping remaining checks
	I1120 20:36:18.830648   58090 status.go:176] ha-351359-m02 status: &{Name:ha-351359-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:36:18.830669   58090 status.go:174] checking status of ha-351359-m03 ...
	I1120 20:36:18.831008   58090 cli_runner.go:164] Run: docker container inspect ha-351359-m03 --format={{.State.Status}}
	I1120 20:36:18.860800   58090 status.go:371] ha-351359-m03 host status = "Running" (err=<nil>)
	I1120 20:36:18.860823   58090 host.go:66] Checking if "ha-351359-m03" exists ...
	I1120 20:36:18.861114   58090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-351359-m03
	I1120 20:36:18.880960   58090 host.go:66] Checking if "ha-351359-m03" exists ...
	I1120 20:36:18.881278   58090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:36:18.881328   58090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-351359-m03
	I1120 20:36:18.898617   58090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/ha-351359-m03/id_rsa Username:docker}
	I1120 20:36:19.003856   58090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:36:19.019727   58090 kubeconfig.go:125] found "ha-351359" server: "https://192.168.49.254:8443"
	I1120 20:36:19.019757   58090 api_server.go:166] Checking apiserver status ...
	I1120 20:36:19.019798   58090 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:36:19.038184   58090 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	I1120 20:36:19.048147   58090 api_server.go:182] apiserver freezer: "7:freezer:/docker/7efea8fa518b332650bcdbf864cb5c08d6096073a78a53d5cb88bb427558c0f6/kubepods/burstable/pod4dc41e0247f3433692302f145642777f/a1463f78525e1ac920fc5ca8c7fc83ae94171f7420492d9c86c332a87b04dd4c"
	I1120 20:36:19.048215   58090 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7efea8fa518b332650bcdbf864cb5c08d6096073a78a53d5cb88bb427558c0f6/kubepods/burstable/pod4dc41e0247f3433692302f145642777f/a1463f78525e1ac920fc5ca8c7fc83ae94171f7420492d9c86c332a87b04dd4c/freezer.state
	I1120 20:36:19.057385   58090 api_server.go:204] freezer state: "THAWED"
	I1120 20:36:19.057421   58090 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1120 20:36:19.065592   58090 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1120 20:36:19.065619   58090 status.go:463] ha-351359-m03 apiserver status = Running (err=<nil>)
	I1120 20:36:19.065629   58090 status.go:176] ha-351359-m03 status: &{Name:ha-351359-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:36:19.065673   58090 status.go:174] checking status of ha-351359-m04 ...
	I1120 20:36:19.066010   58090 cli_runner.go:164] Run: docker container inspect ha-351359-m04 --format={{.State.Status}}
	I1120 20:36:19.084702   58090 status.go:371] ha-351359-m04 host status = "Running" (err=<nil>)
	I1120 20:36:19.084727   58090 host.go:66] Checking if "ha-351359-m04" exists ...
	I1120 20:36:19.085020   58090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-351359-m04
	I1120 20:36:19.102762   58090 host.go:66] Checking if "ha-351359-m04" exists ...
	I1120 20:36:19.103070   58090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:36:19.103115   58090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-351359-m04
	I1120 20:36:19.121823   58090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/ha-351359-m04/id_rsa Username:docker}
	I1120 20:36:19.219668   58090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:36:19.232893   58090 status.go:176] ha-351359-m04 status: &{Name:ha-351359-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.02s)
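The status block captured above lists each node as a short stanza of host/kubelet/apiserver/kubeconfig fields, and the command exits with status 7 because one node is stopped. A rough Go sketch of reading that plain-text output into a per-node map; the parsing approach is an illustration, not how the test helpers actually consume it:

// Sketch: parse the plain-text `minikube status` output shown above into a
// map of node name -> field -> value. Illustrative only.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseStatus(text string) map[string]map[string]string {
	nodes := map[string]map[string]string{}
	var current string
	sc := bufio.NewScanner(strings.NewReader(text))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "":
			current = "" // blank line ends the current node's stanza
		case !strings.Contains(line, ":"):
			current = line // a bare line such as "ha-351359-m02" starts a stanza
			nodes[current] = map[string]string{}
		case current != "":
			parts := strings.SplitN(line, ":", 2)
			nodes[current][strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
		}
	}
	return nodes
}

func main() {
	sample := "ha-351359-m02\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	for node, fields := range parseStatus(sample) {
		fmt.Printf("%s: host=%s kubelet=%s\n", node, fields["host"], fields["kubelet"])
	}
}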

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (13.93s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 node start m02 --alsologtostderr -v 5
E1120 20:36:29.681985    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-351359 node start m02 --alsologtostderr -v 5: (12.424166166s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-351359 status --alsologtostderr -v 5: (1.376282439s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.93s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.22s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.223817638s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.22s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.13s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 stop --alsologtostderr -v 5
E1120 20:37:10.643354    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-351359 stop --alsologtostderr -v 5: (37.772789163s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-351359 start --wait true --alsologtostderr -v 5: (1m8.175792254s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.13s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.08s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-351359 node delete m03 --alsologtostderr -v 5: (10.113982049s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.08s)
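The node-readiness assertion in this block is the go-template query on the line above: render the status of every node condition whose type is Ready and expect each rendered value to be True. A minimal Go sketch of the same check, assuming kubectl is on PATH and the current context already points at the cluster (the helper name is illustrative, not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// allNodesReady runs the same go-template query the test uses and returns
// true only if every node's Ready condition renders as "True".
func allNodesReady() (bool, error) {
	tmpl := `'{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.Trim(line, " '")
		if line == "" {
			continue
		}
		if line != "True" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := allNodesReady()
	fmt.Println("all nodes ready:", ok, "err:", err)
}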

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1120 20:38:32.565348    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.47s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 stop --alsologtostderr -v 5
E1120 20:38:58.758919    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-351359 stop --alsologtostderr -v 5: (36.357695699s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-351359 status --alsologtostderr -v 5: exit status 7 (111.024611ms)

                                                
                                                
-- stdout --
	ha-351359
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-351359-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-351359-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:39:09.644462   73021 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:39:09.644575   73021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:39:09.644585   73021 out.go:374] Setting ErrFile to fd 2...
	I1120 20:39:09.644590   73021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:39:09.644844   73021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 20:39:09.645026   73021 out.go:368] Setting JSON to false
	I1120 20:39:09.645065   73021 mustload.go:66] Loading cluster: ha-351359
	I1120 20:39:09.645125   73021 notify.go:221] Checking for updates...
	I1120 20:39:09.646312   73021 config.go:182] Loaded profile config "ha-351359": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:39:09.646339   73021 status.go:174] checking status of ha-351359 ...
	I1120 20:39:09.647026   73021 cli_runner.go:164] Run: docker container inspect ha-351359 --format={{.State.Status}}
	I1120 20:39:09.665381   73021 status.go:371] ha-351359 host status = "Stopped" (err=<nil>)
	I1120 20:39:09.665404   73021 status.go:384] host is not running, skipping remaining checks
	I1120 20:39:09.665412   73021 status.go:176] ha-351359 status: &{Name:ha-351359 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:39:09.665441   73021 status.go:174] checking status of ha-351359-m02 ...
	I1120 20:39:09.665732   73021 cli_runner.go:164] Run: docker container inspect ha-351359-m02 --format={{.State.Status}}
	I1120 20:39:09.690523   73021 status.go:371] ha-351359-m02 host status = "Stopped" (err=<nil>)
	I1120 20:39:09.690590   73021 status.go:384] host is not running, skipping remaining checks
	I1120 20:39:09.690611   73021 status.go:176] ha-351359-m02 status: &{Name:ha-351359-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:39:09.690636   73021 status.go:174] checking status of ha-351359-m04 ...
	I1120 20:39:09.690935   73021 cli_runner.go:164] Run: docker container inspect ha-351359-m04 --format={{.State.Status}}
	I1120 20:39:09.707735   73021 status.go:371] ha-351359-m04 host status = "Stopped" (err=<nil>)
	I1120 20:39:09.707771   73021 status.go:384] host is not running, skipping remaining checks
	I1120 20:39:09.707779   73021 status.go:176] ha-351359-m04 status: &{Name:ha-351359-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.47s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (60.23s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-351359 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.240974635s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.23s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (85.57s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 node add --control-plane --alsologtostderr -v 5
E1120 20:40:48.703142    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:41:16.407120    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-351359 node add --control-plane --alsologtostderr -v 5: (1m24.42702388s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-351359 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-351359 status --alsologtostderr -v 5: (1.14154571s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (85.57s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.053117027s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

                                                
                                    
TestJSONOutput/start/Command (82.34s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-790735 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-790735 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m22.33248463s)
--- PASS: TestJSONOutput/start/Command (82.34s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-790735 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-790735 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.03s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-790735 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-790735 --output=json --user=testUser: (6.026986399s)
--- PASS: TestJSONOutput/stop/Command (6.03s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-258456 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-258456 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (95.134581ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"15e5a50d-eebd-4d6c-8bcc-160714adaf2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-258456] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bef0ba30-9213-4635-9924-30caa6e970c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21923"}}
	{"specversion":"1.0","id":"d5b77b04-1bab-4602-9ee3-321befc28d2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ecee41a3-fd3b-48c5-97b2-2b5513642c65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig"}}
	{"specversion":"1.0","id":"f3aba1f9-b5f1-458b-9e9e-77ec11c5c066","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube"}}
	{"specversion":"1.0","id":"2e5e8960-eedf-4c5d-82c2-e57e567b9cf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"44dd1ce2-e376-4356-a8b8-50d9820a20d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7bd51503-2c17-4f87-918f-268999b96049","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-258456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-258456
--- PASS: TestErrorJSONOutput (0.24s)
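Every stdout line above is a CloudEvents-style envelope: a type such as io.k8s.sigs.minikube.step, io.k8s.sigs.minikube.info, or io.k8s.sigs.minikube.error plus a data map. A minimal sketch that decodes the error event from this run with the standard library; the event struct below is illustrative rather than minikube's own type:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors only the fields visible in the log output above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The final line of the stdout block above, copied verbatim.
	line := `{"specversion":"1.0","id":"7bd51503-2c17-4f87-918f-268999b96049","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit code %s)\n", ev.Type, ev.Data["message"], ev.Data["exitcode"])
}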

                                                
                                    
TestKicCustomNetwork/create_custom_network (72.25s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-315865 --network=
E1120 20:43:58.759004    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-315865 --network=: (1m10.049840186s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-315865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-315865
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-315865: (2.173535897s)
--- PASS: TestKicCustomNetwork/create_custom_network (72.25s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (39.63s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-827943 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-827943 --network=bridge: (37.46872374s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-827943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-827943
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-827943: (2.134702815s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (39.63s)

                                                
                                    
TestKicExistingNetwork (39.71s)
=== RUN   TestKicExistingNetwork
I1120 20:45:13.401576    4089 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1120 20:45:13.417655    4089 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1120 20:45:13.417731    4089 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1120 20:45:13.417748    4089 cli_runner.go:164] Run: docker network inspect existing-network
W1120 20:45:13.434054    4089 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1120 20:45:13.434087    4089 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1120 20:45:13.434101    4089 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1120 20:45:13.434229    4089 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1120 20:45:13.453554    4089 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8f2399b7fac6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:ce:e1:0f:d8:b1} reservation:<nil>}
I1120 20:45:13.453898    4089 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d932a0}
I1120 20:45:13.453921    4089 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1120 20:45:13.453973    4089 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1120 20:45:13.513657    4089 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-691494 --network=existing-network
E1120 20:45:21.828943    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:45:48.709186    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-691494 --network=existing-network: (37.355012597s)
helpers_test.go:175: Cleaning up "existing-network-691494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-691494
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-691494: (2.206687314s)
I1120 20:45:53.092621    4089 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (39.71s)
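The sequence above is: inspect for the network, find it missing, create it with docker network create on the free 192.168.58.0/24 subnet, then start a profile with --network=existing-network. A condensed reproduction of the create-then-start pair, assuming a minikube binary on PATH instead of the out/minikube-linux-arm64 build used by this job (the bookkeeping labels from the original command are omitted):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output, stopping on error.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	// Same bridge, subnet, gateway and MTU options the log shows at 20:45:13.
	run("docker", "network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
		"existing-network")
	// Start a cluster on the pre-created network, as the test then does.
	run("minikube", "start", "-p", "existing-network-691494", "--network=existing-network")
}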

                                                
                                    
TestKicCustomSubnet (36.24s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-242697 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-242697 --subnet=192.168.60.0/24: (34.035315171s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-242697 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-242697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-242697
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-242697: (2.180811378s)
--- PASS: TestKicCustomSubnet (36.24s)

                                                
                                    
TestKicStaticIP (39.21s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-186485 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-186485 --static-ip=192.168.200.200: (36.766791568s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-186485 ip
helpers_test.go:175: Cleaning up "static-ip-186485" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-186485
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-186485: (2.283905928s)
--- PASS: TestKicStaticIP (39.21s)

                                                
                                    
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (74.28s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-417483 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-417483 --driver=docker  --container-runtime=containerd: (32.767860137s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-420240 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-420240 --driver=docker  --container-runtime=containerd: (35.784482739s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-417483
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-420240
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-420240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-420240
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-420240: (2.281697515s)
helpers_test.go:175: Cleaning up "first-417483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-417483
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-417483: (2.011662931s)
--- PASS: TestMinikubeProfile (74.28s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.57s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-422323 --memory=3072 --mount-string /tmp/TestMountStartserial85328550/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-422323 --memory=3072 --mount-string /tmp/TestMountStartserial85328550/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.565292072s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.57s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-422323 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.79s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-424375 --memory=3072 --mount-string /tmp/TestMountStartserial85328550/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-424375 --memory=3072 --mount-string /tmp/TestMountStartserial85328550/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.789777053s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.79s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-424375 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-422323 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-422323 --alsologtostderr -v=5: (1.702614742s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-424375 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-424375
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-424375: (1.285004155s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.54s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-424375
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-424375: (6.543050083s)
--- PASS: TestMountStart/serial/RestartStopped (7.54s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-424375 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (107.56s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-907803 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1120 20:48:58.759231    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-907803 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m47.036113276s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.56s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.16s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907803 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907803 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-907803 -- rollout status deployment/busybox: (3.158677339s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907803 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907803 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907803 -- exec busybox-7b57f96db7-ctgxg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907803 -- exec busybox-7b57f96db7-mz29p -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907803 -- exec busybox-7b57f96db7-ctgxg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907803 -- exec busybox-7b57f96db7-mz29p -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907803 -- exec busybox-7b57f96db7-ctgxg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907803 -- exec busybox-7b57f96db7-mz29p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.16s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.02s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907803 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907803 -- exec busybox-7b57f96db7-ctgxg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907803 -- exec busybox-7b57f96db7-ctgxg -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907803 -- exec busybox-7b57f96db7-mz29p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907803 -- exec busybox-7b57f96db7-mz29p -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)
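The host-reachability check resolves host.minikube.internal inside each busybox pod, cuts the address out of the nslookup output, and pings it once (192.168.67.1 here, the docker network gateway). A minimal sketch of that round trip, reusing the pod name and shell pipeline from this run; the helper function is illustrative, not part of the test suite:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostIPFromPod runs the same shell pipeline the test uses inside the pod:
// resolve host.minikube.internal and take the address field from line 5 of
// the nslookup output.
func hostIPFromPod(pod string) (string, error) {
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c",
		"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	pod := "busybox-7b57f96db7-ctgxg"
	ip, err := hostIPFromPod(pod)
	if err != nil {
		panic(err)
	}
	// One ping from inside the pod back to the host-side gateway address.
	out, err := exec.Command("kubectl", "exec", pod, "--",
		"sh", "-c", fmt.Sprintf("ping -c 1 %s", ip)).CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}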

                                                
                                    
TestMultiNode/serial/AddNode (28.04s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-907803 -v=5 --alsologtostderr
E1120 20:50:48.702887    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-907803 -v=5 --alsologtostderr: (27.287730222s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.04s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.08s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-907803 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.72s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.37s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 cp testdata/cp-test.txt multinode-907803:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 cp multinode-907803:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2832298088/001/cp-test_multinode-907803.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 cp multinode-907803:/home/docker/cp-test.txt multinode-907803-m02:/home/docker/cp-test_multinode-907803_multinode-907803-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803-m02 "sudo cat /home/docker/cp-test_multinode-907803_multinode-907803-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 cp multinode-907803:/home/docker/cp-test.txt multinode-907803-m03:/home/docker/cp-test_multinode-907803_multinode-907803-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803-m03 "sudo cat /home/docker/cp-test_multinode-907803_multinode-907803-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 cp testdata/cp-test.txt multinode-907803-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 cp multinode-907803-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2832298088/001/cp-test_multinode-907803-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 cp multinode-907803-m02:/home/docker/cp-test.txt multinode-907803:/home/docker/cp-test_multinode-907803-m02_multinode-907803.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803 "sudo cat /home/docker/cp-test_multinode-907803-m02_multinode-907803.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 cp multinode-907803-m02:/home/docker/cp-test.txt multinode-907803-m03:/home/docker/cp-test_multinode-907803-m02_multinode-907803-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803-m03 "sudo cat /home/docker/cp-test_multinode-907803-m02_multinode-907803-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 cp testdata/cp-test.txt multinode-907803-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 cp multinode-907803-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2832298088/001/cp-test_multinode-907803-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 cp multinode-907803-m03:/home/docker/cp-test.txt multinode-907803:/home/docker/cp-test_multinode-907803-m03_multinode-907803.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803 "sudo cat /home/docker/cp-test_multinode-907803-m03_multinode-907803.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 cp multinode-907803-m03:/home/docker/cp-test.txt multinode-907803-m02:/home/docker/cp-test_multinode-907803-m03_multinode-907803-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 ssh -n multinode-907803-m02 "sudo cat /home/docker/cp-test_multinode-907803-m03_multinode-907803-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.37s)
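Each CopyFile step is a push-and-verify round trip: minikube cp a file into a node, then ssh -n into that node, sudo cat the file back, and compare it with the local copy. A minimal sketch of one such round trip for the primary node, assuming a minikube binary on PATH and the profile name from this run:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	profile, node, src := "multinode-907803", "multinode-907803", "testdata/cp-test.txt"

	// Push the file into the node, as the test's cp step does.
	if out, err := exec.Command("minikube", "-p", profile, "cp", src,
		node+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
	}

	// Read it back over ssh and compare with the local copy.
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	want, err := os.ReadFile(src)
	if err != nil {
		panic(err)
	}
	fmt.Println("round trip matches:", bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
}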

                                                
                                    
TestMultiNode/serial/StopNode (2.41s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-907803 node stop m03: (1.329259196s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-907803 status: exit status 7 (535.678851ms)

                                                
                                                
-- stdout --
	multinode-907803
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-907803-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-907803-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-907803 status --alsologtostderr: exit status 7 (545.214746ms)

                                                
                                                
-- stdout --
	multinode-907803
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-907803-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-907803-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:51:28.608323  126313 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:51:28.608457  126313 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:51:28.608469  126313 out.go:374] Setting ErrFile to fd 2...
	I1120 20:51:28.608489  126313 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:51:28.608766  126313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 20:51:28.608982  126313 out.go:368] Setting JSON to false
	I1120 20:51:28.609030  126313 mustload.go:66] Loading cluster: multinode-907803
	I1120 20:51:28.609108  126313 notify.go:221] Checking for updates...
	I1120 20:51:28.609497  126313 config.go:182] Loaded profile config "multinode-907803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:51:28.609518  126313 status.go:174] checking status of multinode-907803 ...
	I1120 20:51:28.610621  126313 cli_runner.go:164] Run: docker container inspect multinode-907803 --format={{.State.Status}}
	I1120 20:51:28.630583  126313 status.go:371] multinode-907803 host status = "Running" (err=<nil>)
	I1120 20:51:28.630608  126313 host.go:66] Checking if "multinode-907803" exists ...
	I1120 20:51:28.630938  126313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-907803
	I1120 20:51:28.651428  126313 host.go:66] Checking if "multinode-907803" exists ...
	I1120 20:51:28.651731  126313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:51:28.651791  126313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-907803
	I1120 20:51:28.685990  126313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/multinode-907803/id_rsa Username:docker}
	I1120 20:51:28.792285  126313 ssh_runner.go:195] Run: systemctl --version
	I1120 20:51:28.798568  126313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:51:28.812642  126313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:51:28.871018  126313 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-20 20:51:28.860126872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 20:51:28.871651  126313 kubeconfig.go:125] found "multinode-907803" server: "https://192.168.67.2:8443"
	I1120 20:51:28.871694  126313 api_server.go:166] Checking apiserver status ...
	I1120 20:51:28.871746  126313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:51:28.884895  126313 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1432/cgroup
	I1120 20:51:28.893688  126313 api_server.go:182] apiserver freezer: "7:freezer:/docker/7f658104842287bf2f23e206d6f60f3b922b1a885cc9d2ecfd6adf66aaa1771f/kubepods/burstable/podf8296581ee357613ed1d185d3f209b6d/50a37213ebde6a940f7a59a69f98ff1ce9eaa2534cb629655721ecf92fdcf7ba"
	I1120 20:51:28.893763  126313 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7f658104842287bf2f23e206d6f60f3b922b1a885cc9d2ecfd6adf66aaa1771f/kubepods/burstable/podf8296581ee357613ed1d185d3f209b6d/50a37213ebde6a940f7a59a69f98ff1ce9eaa2534cb629655721ecf92fdcf7ba/freezer.state
	I1120 20:51:28.901556  126313 api_server.go:204] freezer state: "THAWED"
	I1120 20:51:28.901589  126313 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1120 20:51:28.909976  126313 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1120 20:51:28.910004  126313 status.go:463] multinode-907803 apiserver status = Running (err=<nil>)
	I1120 20:51:28.910015  126313 status.go:176] multinode-907803 status: &{Name:multinode-907803 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:51:28.910033  126313 status.go:174] checking status of multinode-907803-m02 ...
	I1120 20:51:28.910347  126313 cli_runner.go:164] Run: docker container inspect multinode-907803-m02 --format={{.State.Status}}
	I1120 20:51:28.927837  126313 status.go:371] multinode-907803-m02 host status = "Running" (err=<nil>)
	I1120 20:51:28.927864  126313 host.go:66] Checking if "multinode-907803-m02" exists ...
	I1120 20:51:28.928175  126313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-907803-m02
	I1120 20:51:28.946262  126313 host.go:66] Checking if "multinode-907803-m02" exists ...
	I1120 20:51:28.947286  126313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:51:28.947339  126313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-907803-m02
	I1120 20:51:28.965742  126313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21923-2300/.minikube/machines/multinode-907803-m02/id_rsa Username:docker}
	I1120 20:51:29.067830  126313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:51:29.082327  126313 status.go:176] multinode-907803-m02 status: &{Name:multinode-907803-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:51:29.082362  126313 status.go:174] checking status of multinode-907803-m03 ...
	I1120 20:51:29.082718  126313 cli_runner.go:164] Run: docker container inspect multinode-907803-m03 --format={{.State.Status}}
	I1120 20:51:29.100025  126313 status.go:371] multinode-907803-m03 host status = "Stopped" (err=<nil>)
	I1120 20:51:29.100050  126313 status.go:384] host is not running, skipping remaining checks
	I1120 20:51:29.100056  126313 status.go:176] multinode-907803-m03 status: &{Name:multinode-907803-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
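The stderr log above shows how the status command verifies the apiserver: it locates the kube-apiserver process, confirms its freezer cgroup is THAWED, and then probes /healthz. A minimal sketch of the same probe done by hand, assuming the profile and endpoint from the log (the IP, port, and cgroup path differ per cluster):

    # find the apiserver process inside the node
    minikube -p multinode-907803 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # probe the apiserver health endpoint; a healthy apiserver answers HTTP 200 with "ok"
    curl -k https://192.168.67.2:8443/healthz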

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-907803 node start m03 -v=5 --alsologtostderr: (7.339318616s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.15s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (76.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-907803
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-907803
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-907803: (25.190255103s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-907803 --wait=true -v=5 --alsologtostderr
E1120 20:52:11.768673    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-907803 --wait=true -v=5 --alsologtostderr: (51.425559983s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-907803
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.75s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-907803 node delete m03: (4.991594106s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.81s)
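The last step above checks node readiness with a kubectl go-template. The same check, written out without the extra quoting the test harness adds (a sketch; any kubeconfig context works):

    # print the Ready condition status (True/False) for every node, one per line
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'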

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-907803 stop: (24.074562755s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-907803 status: exit status 7 (94.249672ms)

                                                
                                                
-- stdout --
	multinode-907803
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-907803-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-907803 status --alsologtostderr: exit status 7 (99.325354ms)

                                                
                                                
-- stdout --
	multinode-907803
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-907803-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:53:24.027266  135123 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:53:24.027520  135123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:53:24.027553  135123 out.go:374] Setting ErrFile to fd 2...
	I1120 20:53:24.027573  135123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:53:24.027865  135123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 20:53:24.028121  135123 out.go:368] Setting JSON to false
	I1120 20:53:24.028210  135123 mustload.go:66] Loading cluster: multinode-907803
	I1120 20:53:24.028286  135123 notify.go:221] Checking for updates...
	I1120 20:53:24.028676  135123 config.go:182] Loaded profile config "multinode-907803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:53:24.028717  135123 status.go:174] checking status of multinode-907803 ...
	I1120 20:53:24.029270  135123 cli_runner.go:164] Run: docker container inspect multinode-907803 --format={{.State.Status}}
	I1120 20:53:24.050765  135123 status.go:371] multinode-907803 host status = "Stopped" (err=<nil>)
	I1120 20:53:24.050787  135123 status.go:384] host is not running, skipping remaining checks
	I1120 20:53:24.050794  135123 status.go:176] multinode-907803 status: &{Name:multinode-907803 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:53:24.050834  135123 status.go:174] checking status of multinode-907803-m02 ...
	I1120 20:53:24.051137  135123 cli_runner.go:164] Run: docker container inspect multinode-907803-m02 --format={{.State.Status}}
	I1120 20:53:24.075389  135123 status.go:371] multinode-907803-m02 host status = "Stopped" (err=<nil>)
	I1120 20:53:24.075417  135123 status.go:384] host is not running, skipping remaining checks
	I1120 20:53:24.075423  135123 status.go:176] multinode-907803-m02 status: &{Name:multinode-907803-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.27s)
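The status command reports cluster state through its exit code, which is why the harness treats exit status 7 after stopping all nodes as the expected result rather than a failure. A small sketch, same profile as above:

    minikube -p multinode-907803 status
    echo $?   # 0 when everything is running; the non-zero value (7 above) encodes which components are down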

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (49.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-907803 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1120 20:53:58.759839    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-907803 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (48.865491053s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907803 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.57s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (40.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-907803
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-907803-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-907803-m02 --driver=docker  --container-runtime=containerd: exit status 14 (94.046998ms)

                                                
                                                
-- stdout --
	* [multinode-907803-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-907803-m02' is duplicated with machine name 'multinode-907803-m02' in profile 'multinode-907803'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-907803-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-907803-m03 --driver=docker  --container-runtime=containerd: (38.071730681s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-907803
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-907803: exit status 80 (356.581136ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-907803 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-907803-m03 already exists in multinode-907803-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-907803-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-907803-m03: (2.166003715s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.75s)
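Both refusals above are intentional: a new profile may not reuse the machine name of a node in an existing multi-node profile (exit 14, MK_USAGE), and node add refuses to create a node whose generated name collides with an existing standalone profile (exit 80, GUEST_NODE_ADD). A sketch of the constraint, with "demo" standing in for any non-conflicting name:

    minikube start -p multinode-907803-m02 --driver=docker --container-runtime=containerd   # rejected: name already used by a node of multinode-907803
    minikube start -p demo --driver=docker --container-runtime=containerd                   # any name that does not collide is accepted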

                                                
                                    
x
+
TestPreload (133.78s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-533302 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E1120 20:55:48.702842    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-533302 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m2.018968487s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-533302 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-533302 image pull gcr.io/k8s-minikube/busybox: (2.322061959s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-533302
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-533302: (5.884875575s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-533302 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-533302 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m0.856522144s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-533302 image list
helpers_test.go:175: Cleaning up "test-preload-533302" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-533302
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-533302: (2.476762983s)
--- PASS: TestPreload (133.78s)
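TestPreload verifies that an image pulled into a cluster started with --preload=false survives a stop and restart. The flow above, condensed into the underlying commands (same profile and image as the log):

    minikube start -p test-preload-533302 --memory=3072 --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.32.0
    minikube -p test-preload-533302 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-533302
    minikube start -p test-preload-533302 --memory=3072 --driver=docker --container-runtime=containerd
    minikube -p test-preload-533302 image list   # the test expects the previously pulled busybox image to still be listed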

                                                
                                    
x
+
TestScheduledStopUnix (110.61s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-715947 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-715947 --memory=3072 --driver=docker  --container-runtime=containerd: (34.542258676s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-715947 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1120 20:57:46.984644  151031 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:57:46.984854  151031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:57:46.984886  151031 out.go:374] Setting ErrFile to fd 2...
	I1120 20:57:46.984903  151031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:57:46.985272  151031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 20:57:46.985616  151031 out.go:368] Setting JSON to false
	I1120 20:57:46.985782  151031 mustload.go:66] Loading cluster: scheduled-stop-715947
	I1120 20:57:46.986918  151031 config.go:182] Loaded profile config "scheduled-stop-715947": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:57:46.987052  151031 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/config.json ...
	I1120 20:57:46.987291  151031 mustload.go:66] Loading cluster: scheduled-stop-715947
	I1120 20:57:46.987452  151031 config.go:182] Loaded profile config "scheduled-stop-715947": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-715947 -n scheduled-stop-715947
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-715947 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1120 20:57:47.459549  151121 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:57:47.459784  151121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:57:47.459815  151121 out.go:374] Setting ErrFile to fd 2...
	I1120 20:57:47.459835  151121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:57:47.460220  151121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 20:57:47.460521  151121 out.go:368] Setting JSON to false
	I1120 20:57:47.460762  151121 daemonize_unix.go:73] killing process 151053 as it is an old scheduled stop
	I1120 20:57:47.460919  151121 mustload.go:66] Loading cluster: scheduled-stop-715947
	I1120 20:57:47.461397  151121 config.go:182] Loaded profile config "scheduled-stop-715947": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:57:47.461525  151121 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/config.json ...
	I1120 20:57:47.461745  151121 mustload.go:66] Loading cluster: scheduled-stop-715947
	I1120 20:57:47.461905  151121 config.go:182] Loaded profile config "scheduled-stop-715947": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1120 20:57:47.469419    4089 retry.go:31] will retry after 77.172µs: open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/pid: no such file or directory
I1120 20:57:47.472173    4089 retry.go:31] will retry after 124.609µs: open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/pid: no such file or directory
I1120 20:57:47.473305    4089 retry.go:31] will retry after 271.205µs: open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/pid: no such file or directory
I1120 20:57:47.474387    4089 retry.go:31] will retry after 348.883µs: open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/pid: no such file or directory
I1120 20:57:47.475483    4089 retry.go:31] will retry after 734.653µs: open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/pid: no such file or directory
I1120 20:57:47.476580    4089 retry.go:31] will retry after 682.037µs: open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/pid: no such file or directory
I1120 20:57:47.477697    4089 retry.go:31] will retry after 1.636818ms: open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/pid: no such file or directory
I1120 20:57:47.479838    4089 retry.go:31] will retry after 2.475038ms: open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/pid: no such file or directory
I1120 20:57:47.483035    4089 retry.go:31] will retry after 3.79683ms: open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/pid: no such file or directory
I1120 20:57:47.487308    4089 retry.go:31] will retry after 4.055502ms: open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/pid: no such file or directory
I1120 20:57:47.491460    4089 retry.go:31] will retry after 3.262684ms: open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/pid: no such file or directory
I1120 20:57:47.497523    4089 retry.go:31] will retry after 9.659447ms: open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/pid: no such file or directory
I1120 20:57:47.508516    4089 retry.go:31] will retry after 9.864308ms: open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/pid: no such file or directory
I1120 20:57:47.518710    4089 retry.go:31] will retry after 22.052618ms: open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/pid: no such file or directory
I1120 20:57:47.540893    4089 retry.go:31] will retry after 16.132744ms: open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/pid: no such file or directory
I1120 20:57:47.558123    4089 retry.go:31] will retry after 53.252908ms: open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-715947 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-715947 -n scheduled-stop-715947
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-715947
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-715947 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1120 20:58:13.425921  151809 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:58:13.426050  151809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:58:13.426061  151809 out.go:374] Setting ErrFile to fd 2...
	I1120 20:58:13.426066  151809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:58:13.426293  151809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 20:58:13.426573  151809 out.go:368] Setting JSON to false
	I1120 20:58:13.426665  151809 mustload.go:66] Loading cluster: scheduled-stop-715947
	I1120 20:58:13.427008  151809 config.go:182] Loaded profile config "scheduled-stop-715947": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 20:58:13.427077  151809 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/scheduled-stop-715947/config.json ...
	I1120 20:58:13.427252  151809 mustload.go:66] Loading cluster: scheduled-stop-715947
	I1120 20:58:13.427362  151809 config.go:182] Loaded profile config "scheduled-stop-715947": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-715947
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-715947: exit status 7 (64.40762ms)

                                                
                                                
-- stdout --
	scheduled-stop-715947
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-715947 -n scheduled-stop-715947
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-715947 -n scheduled-stop-715947: exit status 7 (70.098192ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-715947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-715947
E1120 20:58:58.759325    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-715947: (4.424570731s)
--- PASS: TestScheduledStopUnix (110.61s)
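The scheduled-stop flow exercised above boils down to three commands; a sketch using the same profile:

    minikube stop -p scheduled-stop-715947 --schedule 5m                  # arm a stop 5 minutes from now; the command returns immediately
    minikube status -p scheduled-stop-715947 --format '{{.TimeToStop}}'   # shows the time remaining until the scheduled stop
    minikube stop -p scheduled-stop-715947 --cancel-scheduled             # cancel any pending scheduled stop

Re-running a stop with --schedule replaces the previous schedule, which is why the log above shows the older scheduled-stop process being killed first.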

                                                
                                    
x
+
TestInsufficientStorage (10.54s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-083550 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-083550 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.965841435s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"afe3b28e-eb2f-4530-9b95-4b7a00492536","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-083550] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3e995e17-33dd-4035-b1cf-9b6595c0ced4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21923"}}
	{"specversion":"1.0","id":"7aae4c68-91f4-49ce-ad8a-9a214483c09e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ab068d53-8099-474f-8c47-9146313861c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig"}}
	{"specversion":"1.0","id":"f8f54d15-3ae4-4848-89ba-e4efd1039cad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube"}}
	{"specversion":"1.0","id":"22ec6736-bdbf-4188-a19f-c5e9e1b77131","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e91e6d68-8055-463e-97cb-f2b879b90010","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0623bebb-18e9-44e2-89da-416c23dfb6b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"11ee2629-f425-4096-be42-b5a9c53f8082","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8cfa18c2-951c-42ab-b9b5-d372fd634b4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2320fac-9e89-4c63-95f7-7961c728f971","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e50f4dfd-f3bb-487e-8dfc-7ae0e239243a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-083550\" primary control-plane node in \"insufficient-storage-083550\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"958b86b7-786c-4d5c-bd7e-34fc28f5b0d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763507788-21924 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"80fc066f-7446-434b-a90c-4610006e0ad6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e111ff45-25d4-4394-98d1-8dbca8a19abf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-083550 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-083550 --output=json --layout=cluster: exit status 7 (302.031477ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-083550","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-083550","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1120 20:59:11.259801  153639 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-083550" does not appear in /home/jenkins/minikube-integration/21923-2300/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-083550 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-083550 --output=json --layout=cluster: exit status 7 (303.525227ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-083550","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-083550","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1120 20:59:11.563849  153706 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-083550" does not appear in /home/jenkins/minikube-integration/21923-2300/kubeconfig
	E1120 20:59:11.573787  153706 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/insufficient-storage-083550/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-083550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-083550
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-083550: (1.962333292s)
--- PASS: TestInsufficientStorage (10.54s)
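This test simulates a full disk via the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE settings visible in the JSON events, so exit code 26 (RSRC_DOCKER_STORAGE) is the expected outcome. When the condition is real, the remedies match the advice embedded in the error event; a sketch:

    df -h /var                    # the capacity check that trips RSRC_DOCKER_STORAGE when /var is nearly full
    docker system prune -a        # reclaim unused Docker data on the host
    minikube start -p insufficient-storage-083550 --force   # or bypass the storage check, as the error message notes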

                                                
                                    
x
+
TestRunningBinaryUpgrade (70.62s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.4136238428 start -p running-upgrade-153928 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.4136238428 start -p running-upgrade-153928 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (31.542785203s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-153928 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-153928 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.764467169s)
helpers_test.go:175: Cleaning up "running-upgrade-153928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-153928
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-153928: (2.134841094s)
--- PASS: TestRunningBinaryUpgrade (70.62s)

                                                
                                    
x
+
TestKubernetesUpgrade (350.93s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-982573 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1120 21:00:48.702951    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-982573 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.442972077s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-982573
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-982573: (1.347778395s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-982573 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-982573 status --format={{.Host}}: exit status 7 (66.213248ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-982573 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-982573 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m53.957554831s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-982573 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-982573 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-982573 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (106.451272ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-982573] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-982573
	    minikube start -p kubernetes-upgrade-982573 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9825732 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-982573 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-982573 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-982573 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (15.475917922s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-982573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-982573
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-982573: (2.384987709s)
--- PASS: TestKubernetesUpgrade (350.93s)
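The upgrade path exercised above is stop-then-start with a newer --kubernetes-version; downgrading in place is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED) and requires deleting the profile first, exactly as the suggestion text says. Condensed:

    minikube start -p kubernetes-upgrade-982573 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
    minikube stop -p kubernetes-upgrade-982573
    minikube start -p kubernetes-upgrade-982573 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=containerd   # in-place upgrade
    minikube start -p kubernetes-upgrade-982573 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd   # refused: downgrade
    minikube delete -p kubernetes-upgrade-982573 && \
      minikube start -p kubernetes-upgrade-982573 --kubernetes-version=v1.28.0   # the supported way to go back to an older version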

                                                
                                    
x
+
TestMissingContainerUpgrade (148.82s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1527714805 start -p missing-upgrade-311816 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1527714805 start -p missing-upgrade-311816 --memory=3072 --driver=docker  --container-runtime=containerd: (1m6.283766052s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-311816
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-311816
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-311816 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-311816 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m11.363245532s)
helpers_test.go:175: Cleaning up "missing-upgrade-311816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-311816
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-311816: (2.279114424s)
--- PASS: TestMissingContainerUpgrade (148.82s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-621916 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-621916 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (99.248213ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-621916] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (45.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-621916 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-621916 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (44.629958617s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-621916 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (20.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-621916 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-621916 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (17.760789369s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-621916 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-621916 status -o json: exit status 2 (442.006421ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-621916","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-621916
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-621916: (2.443211817s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.65s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (10.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-621916 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-621916 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (10.323015154s)
--- PASS: TestNoKubernetes/serial/Start (10.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21923-2300/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-621916 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-621916 "sudo systemctl is-active --quiet service kubelet": exit status 1 (284.05893ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
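systemctl is-active reports the unit state through its exit code, so the non-zero exit here is the expected result for a --no-kubernetes profile. A sketch, same profile as above:

    minikube ssh -p NoKubernetes-621916 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero here: the remote systemctl exited 3 (unit not active), which is exactly what this test asserts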

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.82s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-621916
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-621916: (1.291976071s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-621916 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-621916 --driver=docker  --container-runtime=containerd: (6.595685777s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.60s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-621916 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-621916 "sudo systemctl is-active --quiet service kubelet": exit status 1 (269.76938ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (8.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (8.32s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (52.38s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1312712369 start -p stopped-upgrade-658911 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1120 21:02:01.831024    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1312712369 start -p stopped-upgrade-658911 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (29.584299081s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1312712369 -p stopped-upgrade-658911 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1312712369 -p stopped-upgrade-658911 stop: (1.245959785s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-658911 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-658911 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (21.551760522s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (52.38s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-658911
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-658911: (1.392882025s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

                                                
                                    
TestPause/serial/Start (83.4s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-183631 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1120 21:03:58.759438    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-183631 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m23.402665348s)
--- PASS: TestPause/serial/Start (83.40s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.94s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-183631 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-183631 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.923496868s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.94s)

                                                
                                    
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-183631 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
TestPause/serial/VerifyStatus (0.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-183631 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-183631 --output=json --layout=cluster: exit status 2 (333.884309ms)

                                                
                                                
-- stdout --
	{"Name":"pause-183631","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-183631","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
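Note: the exit status 2 above is likewise expected. With the profile paused, "minikube status --output=json --layout=cluster" reports the apiserver as 418/Paused and the kubelet as 405/Stopped, and signals the degraded state through its exit code. A minimal sketch of decoding that payload, with field names taken from the output shown above (illustrative only):

package main

import (
	"encoding/json"
	"fmt"
)

// Field names mirror the JSON printed by the status command above.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name          string `json:"Name"`
	StatusCode    int    `json:"StatusCode"`
	StatusName    string `json:"StatusName"`
	BinaryVersion string `json:"BinaryVersion"`
	Nodes         []node `json:"Nodes"`
}

func main() {
	// Trimmed copy of the payload captured in this run.
	raw := []byte(`{"Name":"pause-183631","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-183631","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)
	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	for _, n := range st.Nodes {
		fmt.Printf("%s: apiserver=%s kubelet=%s\n",
			n.Name, n.Components["apiserver"].StatusName, n.Components["kubelet"].StatusName)
	}
}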

                                                
                                    
TestPause/serial/Unpause (0.63s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-183631 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

                                                
                                    
TestPause/serial/PauseAgain (1.07s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-183631 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-183631 --alsologtostderr -v=5: (1.068523952s)
--- PASS: TestPause/serial/PauseAgain (1.07s)

                                                
                                    
TestPause/serial/DeletePaused (2.99s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-183631 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-183631 --alsologtostderr -v=5: (2.992692316s)
--- PASS: TestPause/serial/DeletePaused (2.99s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.53s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-183631
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-183631: exit status 1 (25.985096ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-183631: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.53s)
E1120 21:05:48.703073    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/false (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-448616 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-448616 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (286.500319ms)

                                                
                                                
-- stdout --
	* [false-448616] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:06:19.045618  194489 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:06:19.045840  194489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:06:19.045867  194489 out.go:374] Setting ErrFile to fd 2...
	I1120 21:06:19.045887  194489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:06:19.046204  194489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-2300/.minikube/bin
	I1120 21:06:19.050826  194489 out.go:368] Setting JSON to false
	I1120 21:06:19.051867  194489 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2928,"bootTime":1763669851,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1120 21:06:19.051978  194489 start.go:143] virtualization:  
	I1120 21:06:19.055675  194489 out.go:179] * [false-448616] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1120 21:06:19.058971  194489 notify.go:221] Checking for updates...
	I1120 21:06:19.062117  194489 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:06:19.065239  194489 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:06:19.068107  194489 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-2300/kubeconfig
	I1120 21:06:19.071038  194489 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-2300/.minikube
	I1120 21:06:19.073935  194489 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1120 21:06:19.076891  194489 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:06:19.080345  194489 config.go:182] Loaded profile config "kubernetes-upgrade-982573": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1120 21:06:19.080441  194489 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:06:19.143826  194489 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1120 21:06:19.143961  194489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:06:19.223978  194489 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-20 21:06:19.212314283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1120 21:06:19.224084  194489 docker.go:319] overlay module found
	I1120 21:06:19.227325  194489 out.go:179] * Using the docker driver based on user configuration
	I1120 21:06:19.230150  194489 start.go:309] selected driver: docker
	I1120 21:06:19.230168  194489 start.go:930] validating driver "docker" against <nil>
	I1120 21:06:19.230182  194489 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:06:19.234100  194489 out.go:203] 
	W1120 21:06:19.237181  194489 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1120 21:06:19.240263  194489 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-448616 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-448616

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-448616

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-448616

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-448616

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-448616

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-448616

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-448616

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-448616

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-448616

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-448616

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-448616

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-448616" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-448616" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21923-2300/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 21:06:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-982573
contexts:
- context:
    cluster: kubernetes-upgrade-982573
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 21:06:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-982573
  name: kubernetes-upgrade-982573
current-context: kubernetes-upgrade-982573
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-982573
  user:
    client-certificate: /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/kubernetes-upgrade-982573/client.crt
    client-key: /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/kubernetes-upgrade-982573/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-448616

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-448616"

                                                
                                                
----------------------- debugLogs end: false-448616 [took: 4.568560494s] --------------------------------
helpers_test.go:175: Cleaning up "false-448616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-448616
--- PASS: TestNetworkPlugins/group/false (5.03s)
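Note: two details in this block are worth calling out. The exit status 14 is the intended result, since minikube rejects --cni=false with the containerd runtime (MK_USAGE: containerd requires a CNI). And every debugLogs probe fails with "context was not found" because, as the kubectl config dump above shows, the only context present on the host at that point is kubernetes-upgrade-982573. A small sketch of inspecting that kubeconfig with client-go, using the path from this report (illustrative, not part of the suite):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the KUBECONFIG value shown earlier in this report.
	kubeconfig := "/home/jenkins/minikube-integration/21923-2300/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name := range cfg.Contexts {
		// Probes against "false-448616" fail because no such context is listed here.
		fmt.Println("available context:", name)
	}
}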

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (63.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-023521 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1120 21:08:51.770957    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-023521 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m3.868517336s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-023521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-023521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.075378102s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-023521 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-023521 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-023521 --alsologtostderr -v=3: (12.144654466s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-023521 -n old-k8s-version-023521
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-023521 -n old-k8s-version-023521: exit status 7 (73.488024ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-023521 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (51.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-023521 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-023521 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (51.265555642s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-023521 -n old-k8s-version-023521
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.70s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xp6b5" [444f2927-2973-434a-af0e-dbc65888ce21] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.029345637s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xp6b5" [444f2927-2973-434a-af0e-dbc65888ce21] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004466267s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-023521 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-023521 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (78.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-882483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-882483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m18.824342758s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-023521 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-023521 --alsologtostderr -v=1: (1.216091371s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-023521 -n old-k8s-version-023521
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-023521 -n old-k8s-version-023521: exit status 2 (613.304297ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-023521 -n old-k8s-version-023521
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-023521 -n old-k8s-version-023521: exit status 2 (708.061534ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-023521 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-023521 -n old-k8s-version-023521
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-023521 -n old-k8s-version-023521
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (89.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-121127 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1120 21:10:48.703062    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-121127 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m29.441347078s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-882483 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-882483 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-882483 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-882483 --alsologtostderr -v=3: (12.77820207s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.78s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-882483 -n no-preload-882483
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-882483 -n no-preload-882483: exit status 7 (89.195119ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-882483 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (50.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-882483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-882483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (49.739539538s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-882483 -n no-preload-882483
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-121127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-121127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.310824661s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-121127 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-121127 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-121127 --alsologtostderr -v=3: (12.466598668s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-121127 -n embed-certs-121127
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-121127 -n embed-certs-121127: exit status 7 (83.328861ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-121127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (51.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-121127 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-121127 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (51.11909496s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-121127 -n embed-certs-121127
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.54s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9rrw5" [af40571a-aca7-4c18-8d67-e29063d55de8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003186969s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9rrw5" [af40571a-aca7-4c18-8d67-e29063d55de8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003513745s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-882483 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-882483 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-882483 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-882483 -n no-preload-882483
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-882483 -n no-preload-882483: exit status 2 (340.018646ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-882483 -n no-preload-882483
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-882483 -n no-preload-882483: exit status 2 (352.620766ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-882483 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-882483 -n no-preload-882483
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-882483 -n no-preload-882483
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.14s)
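
The pause cycle above can be replayed by hand; status intentionally exits non-zero while components are paused or stopped, so the "|| true" guards only keep a script going. A sketch for this profile (the final "Running" check is an expectation, not captured in this log):

    out/minikube-linux-arm64 pause -p no-preload-882483 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-882483 || true   # "Paused", exit 2
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p no-preload-882483 || true     # "Stopped", exit 2
    out/minikube-linux-arm64 unpause -p no-preload-882483 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-882483           # expected "Running"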

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-588348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-588348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m24.86083835s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.86s)
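
This profile pins the API server to port 8444 rather than minikube's usual 8443; once the start above finishes, the non-default port can be confirmed from the generated context (a sketch; the control-plane URL printed by cluster-info should end in :8444):

    kubectl --context default-k8s-diff-port-588348 cluster-info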

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vh2gh" [683c371d-299a-4afe-bf1c-b6006b0d2784] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005377048s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vh2gh" [683c371d-299a-4afe-bf1c-b6006b0d2784] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003275469s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-121127 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-121127 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-121127 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-121127 -n embed-certs-121127
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-121127 -n embed-certs-121127: exit status 2 (431.93674ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-121127 -n embed-certs-121127
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-121127 -n embed-certs-121127: exit status 2 (450.743956ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-121127 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-121127 -n embed-certs-121127
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-121127 -n embed-certs-121127
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.95s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (42.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-701288 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1120 21:13:54.397159    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:13:54.403501    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:13:54.414849    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:13:54.436196    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:13:54.477598    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:13:54.558958    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:13:54.720334    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:13:55.042028    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:13:55.687969    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:13:56.969984    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:13:58.759843    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:13:59.532038    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:14:04.653964    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:14:14.895306    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-701288 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (42.942805695s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.94s)
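
The newest-cni profile exercises a bring-your-own-CNI start. Reformatted over several lines, the invocation from this run is:

    out/minikube-linux-arm64 start -p newest-cni-701288 \
      --memory=3072 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.34.1

--wait is narrowed to apiserver,system_pods,default_sa because, as the warnings in the later steps note, ordinary pods cannot schedule until a CNI is installed.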

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-701288 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-701288 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.049442622s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-701288 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-701288 --alsologtostderr -v=3: (1.357899924s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.36s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-701288 -n newest-cni-701288
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-701288 -n newest-cni-701288: exit status 7 (74.846928ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-701288 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
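
Addons can be toggled while the cluster is down, which is what this step relies on; a manual sketch against the same profile (a stopped host makes status exit 7, which is informational here):

    out/minikube-linux-arm64 status --format='{{.Host}}' -p newest-cni-701288 || true   # "Stopped", exit 7
    out/minikube-linux-arm64 addons enable dashboard -p newest-cni-701288 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4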

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.67s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-701288 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1120 21:14:35.377092    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-701288 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (17.074831673s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-701288 -n newest-cni-701288
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.67s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-701288 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-701288 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-701288 -n newest-cni-701288
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-701288 -n newest-cni-701288: exit status 2 (355.442045ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-701288 -n newest-cni-701288
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-701288 -n newest-cni-701288: exit status 2 (333.959239ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-701288 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-701288 -n newest-cni-701288
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-701288 -n newest-cni-701288
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (81.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-448616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-448616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m21.519117787s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-588348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-588348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.327295603s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-588348 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-588348 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-588348 --alsologtostderr -v=3: (12.614156263s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.61s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-588348 -n default-k8s-diff-port-588348
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-588348 -n default-k8s-diff-port-588348: exit status 7 (156.719713ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-588348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-588348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1120 21:15:16.338398    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:15:48.702764    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-588348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (53.464704986s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-588348 -n default-k8s-diff-port-588348
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d5gdn" [c3180f6c-371f-4096-9353-7c22e25e47ba] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003351548s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-448616 "pgrep -a kubelet"
I1120 21:16:12.389376    4089 config.go:182] Loaded profile config "auto-448616": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-448616 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-447xf" [51606fb7-4b29-4a97-825d-3bdb054395bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-447xf" [51606fb7-4b29-4a97-825d-3bdb054395bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.010109263s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.30s)
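
Each NetCatPod step applies the same testdata/netcat-deployment.yaml and waits for the app=netcat pod to become Ready; roughly the same check by hand (a sketch, run from the integration test directory so the testdata path resolves, with the timeout matching the test's 15m window):

    kubectl --context auto-448616 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-448616 wait --for=condition=Ready pod -l app=netcat --timeout=15m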

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d5gdn" [c3180f6c-371f-4096-9353-7c22e25e47ba] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00364141s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-588348 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-588348 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-588348 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-588348 -n default-k8s-diff-port-588348
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-588348 -n default-k8s-diff-port-588348: exit status 2 (407.911549ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-588348 -n default-k8s-diff-port-588348
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-588348 -n default-k8s-diff-port-588348: exit status 2 (437.288693ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-588348 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-588348 -n default-k8s-diff-port-588348
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-588348 -n default-k8s-diff-port-588348
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.38s)
E1120 21:21:53.629870    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-448616 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-448616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-448616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
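
The DNS/Localhost/HairPin trio boils down to three probes run from inside the netcat deployment; collected here as a sketch for this profile:

    # DNS: resolve the in-cluster API service name.
    kubectl --context auto-448616 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: connect back to the pod's own listener.
    kubectl --context auto-448616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: reach the pod through its own service name from inside the pod.
    kubectl --context auto-448616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"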

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (92.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-448616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1120 21:16:38.261352    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:16:43.435035    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:16:43.442126    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:16:43.456499    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:16:43.478629    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:16:43.519952    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:16:43.601274    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:16:43.762677    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:16:44.084744    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-448616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m32.235073209s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (92.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (67.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-448616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1120 21:16:48.575198    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:16:53.696725    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:17:03.938012    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:17:24.419280    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-448616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m7.180794879s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.18s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-fk4z8" [69027b7c-227c-40be-81fa-55c81f700138] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-fk4z8" [69027b7c-227c-40be-81fa-55c81f700138] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.010306719s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-t27jb" [7143cc6c-30b2-430d-8067-000192fdd5ee] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003846605s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
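
Both CNI controller checks only wait for the plugin's node pod to be Running; a manual equivalent for these two profiles (a sketch using the labels from the tests above and the same 10m window):

    kubectl --context calico-448616 -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m
    kubectl --context kindnet-448616 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m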

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-448616 "pgrep -a kubelet"
I1120 21:18:00.745390    4089 config.go:182] Loaded profile config "calico-448616": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.52s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-448616 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-98lvs" [bc4a1495-cefe-4abf-b75c-1e53fbc93f1b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-98lvs" [bc4a1495-cefe-4abf-b75c-1e53fbc93f1b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.007573628s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-448616 "pgrep -a kubelet"
E1120 21:18:05.382023    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1120 21:18:05.632780    4089 config.go:182] Loaded profile config "kindnet-448616": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-448616 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ff2hq" [e78009cf-d20a-4991-a38c-42a04041780c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ff2hq" [e78009cf-d20a-4991-a38c-42a04041780c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003482645s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.36s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-448616 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-448616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-448616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-448616 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-448616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-448616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (61.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-448616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-448616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m1.872446258s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.87s)
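
The custom-flannel profile shows that --cni also accepts a local manifest instead of a built-in plugin name; the invocation from this run, reformatted (run from the integration test directory so testdata/kube-flannel.yaml resolves):

    out/minikube-linux-arm64 start -p custom-flannel-448616 \
      --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m \
      --cni=testdata/kube-flannel.yaml \
      --driver=docker --container-runtime=containerd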

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (81.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-448616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1120 21:18:54.396834    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:18:58.759085    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/addons-657501/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:19:22.103651    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/old-k8s-version-023521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:19:27.303576    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/no-preload-882483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-448616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m21.138429252s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-448616 "pgrep -a kubelet"
I1120 21:19:39.911730    4089 config.go:182] Loaded profile config "custom-flannel-448616": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-448616 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b952h" [1f257565-5216-4f2d-a6e4-25a7b3ce4923] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b952h" [1f257565-5216-4f2d-a6e4-25a7b3ce4923] Running
E1120 21:19:44.740530    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/default-k8s-diff-port-588348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:19:44.746896    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/default-k8s-diff-port-588348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:19:44.758361    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/default-k8s-diff-port-588348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:19:44.779847    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/default-k8s-diff-port-588348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:19:44.821259    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/default-k8s-diff-port-588348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:19:44.902822    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/default-k8s-diff-port-588348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:19:45.064839    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/default-k8s-diff-port-588348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:19:45.386665    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/default-k8s-diff-port-588348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:19:46.028750    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/default-k8s-diff-port-588348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:19:47.311455    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/default-k8s-diff-port-588348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004219515s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-448616 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-448616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-448616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-448616 "pgrep -a kubelet"
I1120 21:20:07.316624    4089 config.go:182] Loaded profile config "enable-default-cni-448616": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-448616 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pc77z" [f84ad056-6494-4507-8859-48355d2f7d36] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pc77z" [f84ad056-6494-4507-8859-48355d2f7d36] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00453386s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (60.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-448616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-448616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m0.141754737s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-448616 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-448616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-448616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (78.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-448616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1120 21:20:48.702600    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/functional-365934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:21:06.679936    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/default-k8s-diff-port-588348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:21:12.654072    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:21:12.660448    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:21:12.671822    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:21:12.693191    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:21:12.734527    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:21:12.815795    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:21:12.977221    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-448616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m18.132347744s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-xmvdf" [a64683c5-84f3-4dd6-89dc-76d7ff7b57c2] Running
E1120 21:21:13.299125    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:21:13.941353    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:21:15.223166    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:21:17.784920    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004418199s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
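(Editor's note: a minimal sketch of the equivalent manual check, with the namespace and label taken from the wait condition logged above; the command is illustrative and was not run as part of this test:
    kubectl --context flannel-448616 -n kube-flannel get pods -l app=flannel
)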

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-448616 "pgrep -a kubelet"
I1120 21:21:19.447856    4089 config.go:182] Loaded profile config "flannel-448616": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-448616 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-w5k2r" [a43d80d5-9225-4fb9-8a7f-0c18e88f636f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1120 21:21:22.906505    4089 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/auto-448616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-w5k2r" [a43d80d5-9225-4fb9-8a7f-0c18e88f636f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003424095s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-448616 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-448616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-448616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-448616 "pgrep -a kubelet"
I1120 21:22:02.285561    4089 config.go:182] Loaded profile config "bridge-448616": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-448616 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2td7n" [e88dc3aa-137e-4f3e-9b11-bc1747f0e13f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2td7n" [e88dc3aa-137e-4f3e-9b11-bc1747f0e13f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.016484129s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-448616 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-448616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-448616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (30/333)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.43s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-477109 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-477109" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-477109
--- SKIP: TestDownloadOnlyKic (0.43s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-839927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-839927
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-448616 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-448616

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-448616

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-448616

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-448616

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-448616

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-448616

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-448616

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-448616

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-448616

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-448616

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-448616

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-448616" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-448616" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21923-2300/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 21:06:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-982573
contexts:
- context:
    cluster: kubernetes-upgrade-982573
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 21:06:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-982573
  name: kubernetes-upgrade-982573
current-context: kubernetes-upgrade-982573
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-982573
  user:
    client-certificate: /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/kubernetes-upgrade-982573/client.crt
    client-key: /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/kubernetes-upgrade-982573/client.key
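(Editor's note: a minimal sketch of selecting the context captured in the dump above; the context name kubernetes-upgrade-982573 comes from the config itself, and the commands below are illustrative, not part of this test run:
    kubectl config use-context kubernetes-upgrade-982573
    kubectl --context kubernetes-upgrade-982573 get nodes -o wide
)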

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-448616

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-448616"

                                                
                                                
----------------------- debugLogs end: kubenet-448616 [took: 5.35153508s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-448616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-448616
--- SKIP: TestNetworkPlugins/group/kubenet (5.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-448616 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-448616

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-448616

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-448616

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-448616

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-448616

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-448616

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-448616

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-448616

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-448616

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-448616

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-448616

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-448616" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-448616

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-448616

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-448616

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-448616

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-448616" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-448616" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21923-2300/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 21:06:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-982573
contexts:
- context:
    cluster: kubernetes-upgrade-982573
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 21:06:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-982573
  name: kubernetes-upgrade-982573
current-context: kubernetes-upgrade-982573
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-982573
  user:
    client-certificate: /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/kubernetes-upgrade-982573/client.crt
    client-key: /home/jenkins/minikube-integration/21923-2300/.minikube/profiles/kubernetes-upgrade-982573/client.key
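
The kubeconfig above only defines the kubernetes-upgrade-982573 cluster, context, and user, which is why every kubectl call against the cilium-448616 context in this log fails with "context ... does not exist". A minimal sketch for confirming which contexts the active kubeconfig defines (assuming kubectl is on the PATH and KUBECONFIG points at the file dumped above):

# List every context defined in the active kubeconfig; cilium-448616 is absent here.
kubectl config get-contexts -o name

# Show the context kubectl would use by default (kubernetes-upgrade-982573 in this dump).
kubectl config current-context

# Probing a missing context reproduces the error seen throughout this log.
kubectl --context cilium-448616 get pods -A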

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-448616

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-448616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-448616"

                                                
                                                
----------------------- debugLogs end: cilium-448616 [took: 5.279752175s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-448616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-448616
--- SKIP: TestNetworkPlugins/group/cilium (5.51s)
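
Because the cilium group is skipped, the cilium-448616 profile does not exist at debug-collection time, so every host-level command above reports "Profile ... not found" and every kubectl command reports a missing context. A minimal sketch, assuming minikube and kubectl are on the PATH, for checking that a profile and its context exist before collecting debug logs:

# List the profiles minikube knows about; a skipped group leaves no cilium-448616 entry.
minikube profile list

# Machine-readable variant, useful for scripting the same check.
minikube profile list -o json

# Confirm the matching kubeconfig context exists before running kubectl --context commands.
kubectl config get-contexts cilium-448616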

                                                
                                    