Test Report: Docker_Linux_containerd_arm64 21847

fa4d670f7aa2bf54fac775fb3c292483f6687320:2025-11-21:42430

Test failures (4/333)

Order  Failed test                                                  Duration (s)
301    TestStartStop/group/old-k8s-version/serial/DeployApp         13.6
314    TestStartStop/group/no-preload/serial/DeployApp              14.76
317    TestStartStop/group/embed-certs/serial/DeployApp             15.17
346    TestStartStop/group/default-k8s-diff-port/serial/DeployApp   14.97
TestStartStop/group/old-k8s-version/serial/DeployApp (13.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-092258 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4fd396a4-7f86-4bac-b99a-f7427bb5deb9] Pending
helpers_test.go:352: "busybox" [4fd396a4-7f86-4bac-b99a-f7427bb5deb9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4fd396a4-7f86-4bac-b99a-f7427bb5deb9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004230521s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-092258 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
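
Note: everything below this line is post-mortem context; the failure itself is the single assertion above, where 'ulimit -n' inside the busybox pod returned 1024 instead of the expected 1048576. As a minimal sketch of what that check boils down to (not the actual minikube test code; the context name and expected value are copied from the log above):

	// Hedged, standalone reproduction of the assertion at
	// start_stop_delete_test.go:194. It assumes kubectl is on PATH and the
	// old-k8s-version-092258 context from this run still exists.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const expected = "1048576" // open-file limit the test expects inside pods

		// Same command the test runs: read the soft nofile limit in the pod.
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-092258",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
		if err != nil {
			fmt.Println("kubectl exec failed:", err)
			return
		}

		if got := strings.TrimSpace(string(out)); got != expected {
			// This run printed: 'ulimit -n' returned 1024, expected 1048576
			fmt.Printf("'ulimit -n' returned %s, expected %s\n", got, expected)
		}
	}
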
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-092258
helpers_test.go:243: (dbg) docker inspect old-k8s-version-092258:

-- stdout --
	[
	    {
	        "Id": "06d5dd86afe167e6eb08ea3044c0d6356df71a87f2b7acc2267f870459c4f2cd",
	        "Created": "2025-11-21T14:43:42.553015288Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2835562,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:43:42.615467666Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/06d5dd86afe167e6eb08ea3044c0d6356df71a87f2b7acc2267f870459c4f2cd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/06d5dd86afe167e6eb08ea3044c0d6356df71a87f2b7acc2267f870459c4f2cd/hostname",
	        "HostsPath": "/var/lib/docker/containers/06d5dd86afe167e6eb08ea3044c0d6356df71a87f2b7acc2267f870459c4f2cd/hosts",
	        "LogPath": "/var/lib/docker/containers/06d5dd86afe167e6eb08ea3044c0d6356df71a87f2b7acc2267f870459c4f2cd/06d5dd86afe167e6eb08ea3044c0d6356df71a87f2b7acc2267f870459c4f2cd-json.log",
	        "Name": "/old-k8s-version-092258",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-092258:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-092258",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "06d5dd86afe167e6eb08ea3044c0d6356df71a87f2b7acc2267f870459c4f2cd",
	                "LowerDir": "/var/lib/docker/overlay2/8452e0c048f2d0756f64c494882e5db8b7ecd5ac7b4b99aa190200898d89fa81-init/diff:/var/lib/docker/overlay2/789a4b9f9866e585907664b1eaf98d94438dbf699e0511f3ca5ba5ea682b005e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8452e0c048f2d0756f64c494882e5db8b7ecd5ac7b4b99aa190200898d89fa81/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8452e0c048f2d0756f64c494882e5db8b7ecd5ac7b4b99aa190200898d89fa81/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8452e0c048f2d0756f64c494882e5db8b7ecd5ac7b4b99aa190200898d89fa81/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-092258",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-092258/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-092258",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-092258",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-092258",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "858034611e5cf97ef820625d4dcf77e9b3d1510529f8fc62d29cb6c8391e9b31",
	            "SandboxKey": "/var/run/docker/netns/858034611e5c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36720"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36721"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36724"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36722"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36723"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-092258": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:d3:7f:90:c5:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02cac79c841e49103d05ede51175e2b52d6dd809de4e55337963bb73586b9563",
	                    "EndpointID": "d33cdf4d1c16a9f5945ab24f1a11153f9cd4665673d1a383064fc3d56825842f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-092258",
	                        "06d5dd86afe1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
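
One field worth pulling out of the inspect output above: "Ulimits": [] in HostConfig, i.e. the node container was created with no explicit file-descriptor limit, so 'ulimit -n' inside it is inherited from the Docker daemon's defaults (Docker 28.1.1 on this host). A small Go sketch to confirm that on the CI host (hedged: profile name taken from this run, docker assumed on PATH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Query only the field relevant to the failing 'ulimit -n' assertion.
		out, err := exec.Command("docker", "inspect", "--format",
			"{{json .HostConfig.Ulimits}}", "old-k8s-version-092258").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Print(string(out)) // prints [] for this container, matching the output above
	}
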
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-092258 -n old-k8s-version-092258
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-092258 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-092258 logs -n 25: (1.187170709s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-650772 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo docker system info                                                                                                                                                                                                            │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo containerd config dump                                                                                                                                                                                                        │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo crio config                                                                                                                                                                                                                   │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ delete  │ -p cilium-650772                                                                                                                                                                                                                                    │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ start   │ -p cert-expiration-184410 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-184410   │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ force-systemd-env-041746 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-041746 │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ delete  │ -p force-systemd-env-041746                                                                                                                                                                                                                         │ force-systemd-env-041746 │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:43 UTC │
	│ start   │ -p cert-options-035007 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ ssh     │ cert-options-035007 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ ssh     │ -p cert-options-035007 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ delete  │ -p cert-options-035007                                                                                                                                                                                                                              │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ start   │ -p old-k8s-version-092258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:44 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:43:36
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:43:36.383330 2835167 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:43:36.383513 2835167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:43:36.383524 2835167 out.go:374] Setting ErrFile to fd 2...
	I1121 14:43:36.383530 2835167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:43:36.383828 2835167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:43:36.384345 2835167 out.go:368] Setting JSON to false
	I1121 14:43:36.385436 2835167 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":69965,"bootTime":1763666252,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 14:43:36.385513 2835167 start.go:143] virtualization:  
	I1121 14:43:36.390376 2835167 out.go:179] * [old-k8s-version-092258] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:43:36.394191 2835167 notify.go:221] Checking for updates...
	I1121 14:43:36.397425 2835167 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:43:36.400714 2835167 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:43:36.403761 2835167 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:43:36.406876 2835167 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	I1121 14:43:36.409847 2835167 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:43:36.413120 2835167 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:43:36.416775 2835167 config.go:182] Loaded profile config "cert-expiration-184410": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:43:36.416939 2835167 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:43:36.450855 2835167 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:43:36.450991 2835167 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:43:36.516624 2835167 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:43:36.506387596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:43:36.516732 2835167 docker.go:319] overlay module found
	I1121 14:43:36.519985 2835167 out.go:179] * Using the docker driver based on user configuration
	I1121 14:43:36.522932 2835167 start.go:309] selected driver: docker
	I1121 14:43:36.522956 2835167 start.go:930] validating driver "docker" against <nil>
	I1121 14:43:36.522972 2835167 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:43:36.523794 2835167 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:43:36.580334 2835167 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:43:36.571381735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:43:36.580506 2835167 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:43:36.580737 2835167 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:43:36.583854 2835167 out.go:179] * Using Docker driver with root privileges
	I1121 14:43:36.586764 2835167 cni.go:84] Creating CNI manager for ""
	I1121 14:43:36.586838 2835167 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:43:36.586852 2835167 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 14:43:36.586941 2835167 start.go:353] cluster config:
	{Name:old-k8s-version-092258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-092258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:43:36.590049 2835167 out.go:179] * Starting "old-k8s-version-092258" primary control-plane node in "old-k8s-version-092258" cluster
	I1121 14:43:36.592891 2835167 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 14:43:36.595918 2835167 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:43:36.598784 2835167 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 14:43:36.598825 2835167 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:43:36.598850 2835167 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1121 14:43:36.598866 2835167 cache.go:65] Caching tarball of preloaded images
	I1121 14:43:36.598958 2835167 preload.go:238] Found /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1121 14:43:36.598968 2835167 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1121 14:43:36.599136 2835167 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/config.json ...
	I1121 14:43:36.599165 2835167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/config.json: {Name:mk03fe35747f6c73b79e2daee9ca9c7b13210439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:43:36.618070 2835167 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:43:36.618098 2835167 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:43:36.618116 2835167 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:43:36.618138 2835167 start.go:360] acquireMachinesLock for old-k8s-version-092258: {Name:mkf21290144e8164ceda2548005b3a6e3ed2df4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:43:36.618251 2835167 start.go:364] duration metric: took 91.969µs to acquireMachinesLock for "old-k8s-version-092258"
	I1121 14:43:36.618280 2835167 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-092258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-092258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:43:36.618362 2835167 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:43:36.621674 2835167 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:43:36.621900 2835167 start.go:159] libmachine.API.Create for "old-k8s-version-092258" (driver="docker")
	I1121 14:43:36.621945 2835167 client.go:173] LocalClient.Create starting
	I1121 14:43:36.622015 2835167 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem
	I1121 14:43:36.622055 2835167 main.go:143] libmachine: Decoding PEM data...
	I1121 14:43:36.622071 2835167 main.go:143] libmachine: Parsing certificate...
	I1121 14:43:36.622122 2835167 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem
	I1121 14:43:36.622144 2835167 main.go:143] libmachine: Decoding PEM data...
	I1121 14:43:36.622155 2835167 main.go:143] libmachine: Parsing certificate...
	I1121 14:43:36.622504 2835167 cli_runner.go:164] Run: docker network inspect old-k8s-version-092258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:43:36.638581 2835167 cli_runner.go:211] docker network inspect old-k8s-version-092258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:43:36.638673 2835167 network_create.go:284] running [docker network inspect old-k8s-version-092258] to gather additional debugging logs...
	I1121 14:43:36.638695 2835167 cli_runner.go:164] Run: docker network inspect old-k8s-version-092258
	W1121 14:43:36.654171 2835167 cli_runner.go:211] docker network inspect old-k8s-version-092258 returned with exit code 1
	I1121 14:43:36.654200 2835167 network_create.go:287] error running [docker network inspect old-k8s-version-092258]: docker network inspect old-k8s-version-092258: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-092258 not found
	I1121 14:43:36.654221 2835167 network_create.go:289] output of [docker network inspect old-k8s-version-092258]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-092258 not found
	
	** /stderr **
	I1121 14:43:36.654336 2835167 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:43:36.670217 2835167 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c13a3bee40ff IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:9f:8e:c6:2a:d6} reservation:<nil>}
	I1121 14:43:36.670512 2835167 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1859e8fd5584 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:c6:00:f6:5b:96} reservation:<nil>}
	I1121 14:43:36.670770 2835167 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-44a9b6062c4d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:b5:31:a5:3d:f0} reservation:<nil>}
	I1121 14:43:36.671175 2835167 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1a410}
	I1121 14:43:36.671200 2835167 network_create.go:124] attempt to create docker network old-k8s-version-092258 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1121 14:43:36.671260 2835167 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-092258 old-k8s-version-092258
	I1121 14:43:36.731269 2835167 network_create.go:108] docker network old-k8s-version-092258 192.168.76.0/24 created
	I1121 14:43:36.731302 2835167 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-092258" container
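
For readers unfamiliar with the subnet scan above: minikube walks candidate private /24 networks (192.168.49.0/24, 192.168.58.0/24, ...) and takes the first one not already held by a local bridge interface. A rough Go illustration of that selection, using the candidates from this run (a sketch under those assumptions, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet returns the first candidate CIDR that contains no address
	// currently assigned to a local interface (i.e. no existing docker bridge).
	func firstFreeSubnet(candidates []string) (string, error) {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return "", err
		}
		for _, cidr := range candidates {
			_, ipnet, err := net.ParseCIDR(cidr)
			if err != nil {
				return "", err
			}
			taken := false
			for _, a := range addrs {
				// e.g. br-c13a3bee40ff holding 192.168.49.1 marks 192.168.49.0/24 taken
				if ip, _, perr := net.ParseCIDR(a.String()); perr == nil && ipnet.Contains(ip) {
					taken = true
					break
				}
			}
			if !taken {
				return cidr, nil
			}
		}
		return "", fmt.Errorf("no free subnet among %v", candidates)
	}

	func main() {
		subnet, err := firstFreeSubnet([]string{
			"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24",
		})
		fmt.Println(subnet, err) // on this host: 192.168.76.0/24 <nil>
	}
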
	I1121 14:43:36.731379 2835167 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:43:36.747231 2835167 cli_runner.go:164] Run: docker volume create old-k8s-version-092258 --label name.minikube.sigs.k8s.io=old-k8s-version-092258 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:43:36.766444 2835167 oci.go:103] Successfully created a docker volume old-k8s-version-092258
	I1121 14:43:36.766529 2835167 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-092258-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-092258 --entrypoint /usr/bin/test -v old-k8s-version-092258:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:43:37.323084 2835167 oci.go:107] Successfully prepared a docker volume old-k8s-version-092258
	I1121 14:43:37.323160 2835167 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 14:43:37.323176 2835167 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:43:37.323249 2835167 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-092258:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 14:43:42.479471 2835167 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-092258:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.156180728s)
	I1121 14:43:42.479504 2835167 kic.go:203] duration metric: took 5.156324945s to extract preloaded images to volume ...
	W1121 14:43:42.479641 2835167 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 14:43:42.479761 2835167 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:43:42.537202 2835167 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-092258 --name old-k8s-version-092258 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-092258 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-092258 --network old-k8s-version-092258 --ip 192.168.76.2 --volume old-k8s-version-092258:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:43:42.837167 2835167 cli_runner.go:164] Run: docker container inspect old-k8s-version-092258 --format={{.State.Running}}
	I1121 14:43:42.855671 2835167 cli_runner.go:164] Run: docker container inspect old-k8s-version-092258 --format={{.State.Status}}
	I1121 14:43:42.878990 2835167 cli_runner.go:164] Run: docker exec old-k8s-version-092258 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:43:42.930669 2835167 oci.go:144] the created container "old-k8s-version-092258" has a running status.
	I1121 14:43:42.930708 2835167 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa...
	I1121 14:43:43.641994 2835167 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:43:43.662600 2835167 cli_runner.go:164] Run: docker container inspect old-k8s-version-092258 --format={{.State.Status}}
	I1121 14:43:43.680831 2835167 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:43:43.680859 2835167 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-092258 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:43:43.723054 2835167 cli_runner.go:164] Run: docker container inspect old-k8s-version-092258 --format={{.State.Status}}
	I1121 14:43:43.743125 2835167 machine.go:94] provisionDockerMachine start ...
	I1121 14:43:43.743225 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:43:43.760910 2835167 main.go:143] libmachine: Using SSH client type: native
	I1121 14:43:43.761356 2835167 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36720 <nil> <nil>}
	I1121 14:43:43.761377 2835167 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:43:43.762044 2835167 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1121 14:43:46.904947 2835167 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-092258
	
	I1121 14:43:46.904969 2835167 ubuntu.go:182] provisioning hostname "old-k8s-version-092258"
	I1121 14:43:46.905064 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:43:46.922185 2835167 main.go:143] libmachine: Using SSH client type: native
	I1121 14:43:46.922512 2835167 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36720 <nil> <nil>}
	I1121 14:43:46.922533 2835167 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-092258 && echo "old-k8s-version-092258" | sudo tee /etc/hostname
	I1121 14:43:47.078378 2835167 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-092258
	
	I1121 14:43:47.078461 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:43:47.102253 2835167 main.go:143] libmachine: Using SSH client type: native
	I1121 14:43:47.102562 2835167 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36720 <nil> <nil>}
	I1121 14:43:47.102584 2835167 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-092258' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-092258/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-092258' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:43:47.245266 2835167 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:43:47.245351 2835167 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-2633933/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-2633933/.minikube}
	I1121 14:43:47.245378 2835167 ubuntu.go:190] setting up certificates
	I1121 14:43:47.245387 2835167 provision.go:84] configureAuth start
	I1121 14:43:47.245451 2835167 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-092258
	I1121 14:43:47.263540 2835167 provision.go:143] copyHostCerts
	I1121 14:43:47.263612 2835167 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem, removing ...
	I1121 14:43:47.263626 2835167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem
	I1121 14:43:47.263706 2835167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem (1082 bytes)
	I1121 14:43:47.263811 2835167 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem, removing ...
	I1121 14:43:47.263822 2835167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem
	I1121 14:43:47.263853 2835167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem (1123 bytes)
	I1121 14:43:47.263922 2835167 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem, removing ...
	I1121 14:43:47.263932 2835167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem
	I1121 14:43:47.263960 2835167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem (1679 bytes)
	I1121 14:43:47.264022 2835167 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-092258 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-092258]
	I1121 14:43:48.319202 2835167 provision.go:177] copyRemoteCerts
	I1121 14:43:48.319298 2835167 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:43:48.319410 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:43:48.336010 2835167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36720 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa Username:docker}
	I1121 14:43:48.436628 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1121 14:43:48.454108 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1121 14:43:48.472424 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:43:48.490030 2835167 provision.go:87] duration metric: took 1.244618966s to configureAuth
	I1121 14:43:48.490068 2835167 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:43:48.490248 2835167 config.go:182] Loaded profile config "old-k8s-version-092258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:43:48.490256 2835167 machine.go:97] duration metric: took 4.747108297s to provisionDockerMachine
	I1121 14:43:48.490263 2835167 client.go:176] duration metric: took 11.868306871s to LocalClient.Create
	I1121 14:43:48.490277 2835167 start.go:167] duration metric: took 11.868378746s to libmachine.API.Create "old-k8s-version-092258"
	I1121 14:43:48.490284 2835167 start.go:293] postStartSetup for "old-k8s-version-092258" (driver="docker")
	I1121 14:43:48.490298 2835167 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:43:48.490349 2835167 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:43:48.490386 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:43:48.506758 2835167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36720 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa Username:docker}
	I1121 14:43:48.608899 2835167 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:43:48.612079 2835167 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:43:48.612112 2835167 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:43:48.612141 2835167 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/addons for local assets ...
	I1121 14:43:48.612212 2835167 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/files for local assets ...
	I1121 14:43:48.612293 2835167 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem -> 26357852.pem in /etc/ssl/certs
	I1121 14:43:48.612406 2835167 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:43:48.619568 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:43:48.636807 2835167 start.go:296] duration metric: took 146.508249ms for postStartSetup
	I1121 14:43:48.637286 2835167 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-092258
	I1121 14:43:48.653983 2835167 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/config.json ...
	I1121 14:43:48.654267 2835167 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:43:48.654326 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:43:48.671301 2835167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36720 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa Username:docker}
	I1121 14:43:48.770419 2835167 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:43:48.775323 2835167 start.go:128] duration metric: took 12.156945443s to createHost
	I1121 14:43:48.775348 2835167 start.go:83] releasing machines lock for "old-k8s-version-092258", held for 12.157085763s
	I1121 14:43:48.775420 2835167 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-092258
	I1121 14:43:48.792336 2835167 ssh_runner.go:195] Run: cat /version.json
	I1121 14:43:48.792390 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:43:48.792660 2835167 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:43:48.792747 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:43:48.822571 2835167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36720 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa Username:docker}
	I1121 14:43:48.823165 2835167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36720 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa Username:docker}
	I1121 14:43:49.010028 2835167 ssh_runner.go:195] Run: systemctl --version
	I1121 14:43:49.017657 2835167 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:43:49.023491 2835167 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:43:49.023609 2835167 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:43:49.052303 2835167 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
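
The find/mv pair above sidelines any bridge or podman CNI configs so that kindnet (selected later in the log) owns /etc/cni/net.d. A rough Go equivalent of that rename-to-.mk_disabled pass, for illustration only:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Same idea as the find -exec mv above: move competing CNI
	// configs out of the way without deleting them.
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			fmt.Println("disabling", m)
			os.Rename(m, m+".mk_disabled")
		}
	}
}
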
	I1121 14:43:49.052375 2835167 start.go:496] detecting cgroup driver to use...
	I1121 14:43:49.052424 2835167 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:43:49.052491 2835167 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:43:49.069013 2835167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:43:49.083455 2835167 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:43:49.083552 2835167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:43:49.106526 2835167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:43:49.129811 2835167 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:43:49.256720 2835167 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:43:49.389600 2835167 docker.go:234] disabling docker service ...
	I1121 14:43:49.389683 2835167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:43:49.410151 2835167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:43:49.423211 2835167 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:43:49.545870 2835167 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:43:49.674290 2835167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:43:49.688105 2835167 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:43:49.704461 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1121 14:43:49.715375 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:43:49.725179 2835167 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1121 14:43:49.725253 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1121 14:43:49.734991 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:43:49.745235 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:43:49.754840 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:43:49.764086 2835167 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:43:49.773152 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:43:49.781841 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:43:49.791026 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:43:49.800527 2835167 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:43:49.808398 2835167 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:43:49.815956 2835167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:43:49.931886 2835167 ssh_runner.go:195] Run: sudo systemctl restart containerd
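
The sed runs above rewrite /etc/containerd/config.toml in place before the restart; the key edit for this run is forcing SystemdCgroup = false to match the host's "cgroupfs" driver. A tiny Go sketch of that one rewrite — the same regex the sed uses, applied to a stand-in config string:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for /etc/containerd/config.toml content.
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Force SystemdCgroup = false, preserving indentation, because
	// the host was detected as using the "cgroupfs" driver.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}

Keeping the kubelet and containerd on the same cgroup driver matters: a mismatch is a classic cause of pods that never leave crash loops, which is why both sides are pinned before the kubelet starts.
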
	I1121 14:43:50.066706 2835167 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:43:50.066831 2835167 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:43:50.071006 2835167 start.go:564] Will wait 60s for crictl version
	I1121 14:43:50.071125 2835167 ssh_runner.go:195] Run: which crictl
	I1121 14:43:50.075393 2835167 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:43:50.118651 2835167 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:43:50.118773 2835167 ssh_runner.go:195] Run: containerd --version
	I1121 14:43:50.141212 2835167 ssh_runner.go:195] Run: containerd --version
	I1121 14:43:50.169638 2835167 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1121 14:43:50.172499 2835167 cli_runner.go:164] Run: docker network inspect old-k8s-version-092258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:43:50.189507 2835167 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1121 14:43:50.198370 2835167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
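
The bash fragment above is minikube's idempotent /etc/hosts update: drop any stale host.minikube.internal line, append the current gateway IP, and copy the result back via sudo. The same drop-and-append logic in Go (illustrative; minikube does it in shell exactly as logged):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Stand-in for the current /etc/hosts content.
	hosts := "127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n"
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		// Drop any existing entry, like the grep -v above.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	// Append the current gateway mapping, like the echo above.
	kept = append(kept, "192.168.76.1\thost.minikube.internal")
	fmt.Println(strings.Join(kept, "\n"))
}

The identical pattern repeats further down for control-plane.minikube.internal.
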
	I1121 14:43:50.208962 2835167 kubeadm.go:884] updating cluster {Name:old-k8s-version-092258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-092258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:43:50.209203 2835167 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 14:43:50.209272 2835167 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:43:50.234384 2835167 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:43:50.234409 2835167 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:43:50.234475 2835167 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:43:50.259395 2835167 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:43:50.259421 2835167 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:43:50.259430 2835167 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 containerd true true} ...
	I1121 14:43:50.259536 2835167 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-092258 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-092258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:43:50.259609 2835167 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:43:50.287070 2835167 cni.go:84] Creating CNI manager for ""
	I1121 14:43:50.287095 2835167 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:43:50.287115 2835167 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:43:50.287139 2835167 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-092258 NodeName:old-k8s-version-092258 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:43:50.287271 2835167 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-092258"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:43:50.287342 2835167 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1121 14:43:50.295386 2835167 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:43:50.295454 2835167 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:43:50.303213 2835167 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1121 14:43:50.317127 2835167 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:43:50.331240 2835167 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1121 14:43:50.344296 2835167 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:43:50.347919 2835167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:43:50.357793 2835167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:43:50.483017 2835167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:43:50.499630 2835167 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258 for IP: 192.168.76.2
	I1121 14:43:50.499697 2835167 certs.go:195] generating shared ca certs ...
	I1121 14:43:50.499729 2835167 certs.go:227] acquiring lock for ca certs: {Name:mk0a1b8efa9f1d453751b4f7afafeea16d7243a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:43:50.499912 2835167 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.key
	I1121 14:43:50.499982 2835167 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.key
	I1121 14:43:50.500020 2835167 certs.go:257] generating profile certs ...
	I1121 14:43:50.500125 2835167 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.key
	I1121 14:43:50.500157 2835167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt with IP's: []
	I1121 14:43:50.881389 2835167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt ...
	I1121 14:43:50.881423 2835167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: {Name:mkd66b37bd8f68df88ee391b1c0ae406d24100dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:43:50.881622 2835167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.key ...
	I1121 14:43:50.881638 2835167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.key: {Name:mk87497e50632ba54cdc705e25ae82f0b49d923a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:43:50.881733 2835167 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.key.fe0fc8ce
	I1121 14:43:50.881751 2835167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.crt.fe0fc8ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1121 14:43:51.368107 2835167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.crt.fe0fc8ce ...
	I1121 14:43:51.368141 2835167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.crt.fe0fc8ce: {Name:mke3412122bd471676c09fe30765bbb879486748 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:43:51.368348 2835167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.key.fe0fc8ce ...
	I1121 14:43:51.368363 2835167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.key.fe0fc8ce: {Name:mkb54d05c24cffdedc4d0fc59e5780f32a7a4815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:43:51.368463 2835167 certs.go:382] copying /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.crt.fe0fc8ce -> /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.crt
	I1121 14:43:51.368560 2835167 certs.go:386] copying /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.key.fe0fc8ce -> /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.key
	I1121 14:43:51.368628 2835167 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/proxy-client.key
	I1121 14:43:51.368647 2835167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/proxy-client.crt with IP's: []
	I1121 14:43:51.447326 2835167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/proxy-client.crt ...
	I1121 14:43:51.447360 2835167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/proxy-client.crt: {Name:mk8fd112c818af834b5d68c83f8c92f6291ef45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:43:51.447577 2835167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/proxy-client.key ...
	I1121 14:43:51.447598 2835167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/proxy-client.key: {Name:mk461afd810cf943501ef59a65730b33eecea0e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:43:51.447801 2835167 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785.pem (1338 bytes)
	W1121 14:43:51.447843 2835167 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785_empty.pem, impossibly tiny 0 bytes
	I1121 14:43:51.447858 2835167 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:43:51.447889 2835167 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem (1082 bytes)
	I1121 14:43:51.447920 2835167 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:43:51.447947 2835167 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem (1679 bytes)
	I1121 14:43:51.447993 2835167 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:43:51.448566 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:43:51.466854 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:43:51.484620 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:43:51.502526 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:43:51.521430 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1121 14:43:51.540179 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 14:43:51.560949 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:43:51.579376 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:43:51.598197 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:43:51.615934 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785.pem --> /usr/share/ca-certificates/2635785.pem (1338 bytes)
	I1121 14:43:51.634213 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /usr/share/ca-certificates/26357852.pem (1708 bytes)
	I1121 14:43:51.652095 2835167 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:43:51.664901 2835167 ssh_runner.go:195] Run: openssl version
	I1121 14:43:51.671463 2835167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:43:51.680109 2835167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:43:51.684127 2835167 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:43:51.684210 2835167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:43:51.727682 2835167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:43:51.736178 2835167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2635785.pem && ln -fs /usr/share/ca-certificates/2635785.pem /etc/ssl/certs/2635785.pem"
	I1121 14:43:51.744471 2835167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2635785.pem
	I1121 14:43:51.748815 2835167 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/2635785.pem
	I1121 14:43:51.748886 2835167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2635785.pem
	I1121 14:43:51.790035 2835167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2635785.pem /etc/ssl/certs/51391683.0"
	I1121 14:43:51.798184 2835167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26357852.pem && ln -fs /usr/share/ca-certificates/26357852.pem /etc/ssl/certs/26357852.pem"
	I1121 14:43:51.806524 2835167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26357852.pem
	I1121 14:43:51.810447 2835167 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/26357852.pem
	I1121 14:43:51.810539 2835167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26357852.pem
	I1121 14:43:51.851288 2835167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/26357852.pem /etc/ssl/certs/3ec20f2e.0"
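
Each cert block above follows the same OpenSSL trust-directory convention: compute the subject hash (openssl x509 -hash -noout) and link the cert at /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find it by hash. A hypothetical Go helper doing those two steps — shelling out to openssl just as the log does; note os.Symlink does not force-replace the way the ln -fs above does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	// openssl prints the subject hash that trust directories key on.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	// e.g. b5213941 -> /etc/ssl/certs/b5213941.0
	link := "/etc/ssl/certs/" + hash + ".0"
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
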
	I1121 14:43:51.859608 2835167 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:43:51.863193 2835167 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:43:51.863292 2835167 kubeadm.go:401] StartCluster: {Name:old-k8s-version-092258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-092258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:43:51.863366 2835167 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:43:51.863446 2835167 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:43:51.891859 2835167 cri.go:89] found id: ""
	I1121 14:43:51.891941 2835167 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:43:51.900527 2835167 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:43:51.909151 2835167 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:43:51.909241 2835167 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:43:51.919737 2835167 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:43:51.919760 2835167 kubeadm.go:158] found existing configuration files:
	
	I1121 14:43:51.919826 2835167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:43:51.928877 2835167 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:43:51.928996 2835167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:43:51.936697 2835167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:43:51.944973 2835167 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:43:51.945066 2835167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:43:51.952856 2835167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:43:51.960547 2835167 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:43:51.960686 2835167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:43:51.968472 2835167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:43:51.976805 2835167 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:43:51.976888 2835167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:43:51.984345 2835167 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:43:52.080198 2835167 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 14:43:52.183959 2835167 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:44:07.478855 2835167 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1121 14:44:07.478915 2835167 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:44:07.479009 2835167 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:44:07.479066 2835167 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1121 14:44:07.479102 2835167 kubeadm.go:319] OS: Linux
	I1121 14:44:07.479150 2835167 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:44:07.479201 2835167 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1121 14:44:07.479250 2835167 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:44:07.479300 2835167 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:44:07.479351 2835167 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:44:07.479413 2835167 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:44:07.479461 2835167 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:44:07.479511 2835167 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:44:07.479559 2835167 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1121 14:44:07.479634 2835167 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:44:07.479732 2835167 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:44:07.479828 2835167 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1121 14:44:07.479893 2835167 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:44:07.482963 2835167 out.go:252]   - Generating certificates and keys ...
	I1121 14:44:07.483064 2835167 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:44:07.483132 2835167 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:44:07.483202 2835167 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:44:07.483261 2835167 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:44:07.483324 2835167 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:44:07.483377 2835167 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:44:07.483433 2835167 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:44:07.483564 2835167 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-092258] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 14:44:07.483619 2835167 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:44:07.483749 2835167 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-092258] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 14:44:07.483818 2835167 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:44:07.483886 2835167 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:44:07.483933 2835167 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:44:07.483992 2835167 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:44:07.484045 2835167 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:44:07.484101 2835167 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:44:07.484169 2835167 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:44:07.484226 2835167 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:44:07.484312 2835167 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:44:07.484381 2835167 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:44:07.487428 2835167 out.go:252]   - Booting up control plane ...
	I1121 14:44:07.487608 2835167 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:44:07.487710 2835167 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:44:07.487786 2835167 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:44:07.487905 2835167 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:44:07.488000 2835167 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:44:07.488044 2835167 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:44:07.488215 2835167 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1121 14:44:07.488300 2835167 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.502585 seconds
	I1121 14:44:07.488418 2835167 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:44:07.488571 2835167 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:44:07.488637 2835167 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:44:07.488849 2835167 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-092258 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:44:07.488911 2835167 kubeadm.go:319] [bootstrap-token] Using token: szaotk.n52uxpmszzhbby9z
	I1121 14:44:07.491820 2835167 out.go:252]   - Configuring RBAC rules ...
	I1121 14:44:07.491949 2835167 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:44:07.492037 2835167 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:44:07.492184 2835167 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:44:07.492318 2835167 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:44:07.492450 2835167 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:44:07.492566 2835167 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:44:07.492691 2835167 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:44:07.492737 2835167 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:44:07.492785 2835167 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:44:07.492789 2835167 kubeadm.go:319] 
	I1121 14:44:07.492852 2835167 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:44:07.492857 2835167 kubeadm.go:319] 
	I1121 14:44:07.492938 2835167 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:44:07.492942 2835167 kubeadm.go:319] 
	I1121 14:44:07.492968 2835167 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:44:07.493047 2835167 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:44:07.493101 2835167 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:44:07.493105 2835167 kubeadm.go:319] 
	I1121 14:44:07.493162 2835167 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:44:07.493166 2835167 kubeadm.go:319] 
	I1121 14:44:07.493217 2835167 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:44:07.493221 2835167 kubeadm.go:319] 
	I1121 14:44:07.493276 2835167 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:44:07.493355 2835167 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:44:07.493428 2835167 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:44:07.493432 2835167 kubeadm.go:319] 
	I1121 14:44:07.493521 2835167 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:44:07.493601 2835167 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:44:07.493607 2835167 kubeadm.go:319] 
	I1121 14:44:07.493695 2835167 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token szaotk.n52uxpmszzhbby9z \
	I1121 14:44:07.493804 2835167 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d756a1c258e082bbc06f965046f24233900a8e069c2a9d29a764f0b68af739ae \
	I1121 14:44:07.493826 2835167 kubeadm.go:319] 	--control-plane 
	I1121 14:44:07.493830 2835167 kubeadm.go:319] 
	I1121 14:44:07.493920 2835167 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:44:07.493924 2835167 kubeadm.go:319] 
	I1121 14:44:07.494010 2835167 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token szaotk.n52uxpmszzhbby9z \
	I1121 14:44:07.494129 2835167 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d756a1c258e082bbc06f965046f24233900a8e069c2a9d29a764f0b68af739ae 
	I1121 14:44:07.494138 2835167 cni.go:84] Creating CNI manager for ""
	I1121 14:44:07.494145 2835167 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:44:07.497166 2835167 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:44:07.500216 2835167 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:44:07.505987 2835167 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1121 14:44:07.506006 2835167 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:44:07.546131 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:44:08.532445 2835167 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:44:08.532548 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:08.532605 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-092258 minikube.k8s.io/updated_at=2025_11_21T14_44_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=old-k8s-version-092258 minikube.k8s.io/primary=true
	I1121 14:44:08.674460 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:08.674568 2835167 ops.go:34] apiserver oom_adj: -16
	I1121 14:44:09.175394 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:09.675368 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:10.174563 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:10.675545 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:11.174568 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:11.675237 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:12.175098 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:12.675136 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:13.175409 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:13.674508 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:14.175483 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:14.674955 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:15.174638 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:15.674566 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:16.174919 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:16.674946 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:17.174624 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:17.675110 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:18.174609 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:18.674810 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:19.174819 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:19.675503 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:19.793386 2835167 kubeadm.go:1114] duration metric: took 11.260904779s to wait for elevateKubeSystemPrivileges
	I1121 14:44:19.793428 2835167 kubeadm.go:403] duration metric: took 27.930140359s to StartCluster
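
The burst of `kubectl get sa default` calls above is a fixed-interval poll: kubeadm init has finished, but the default ServiceAccount only appears once the controller-manager's token controller catches up, so the command is retried roughly every 500ms (11.26s total here, per the duration metric). A bare-bones sketch of that wait, assuming kubectl is on PATH and pointed at the right kubeconfig:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll until `kubectl get sa default` succeeds or we time out.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			fmt.Println("default service account ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
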
	I1121 14:44:19.793447 2835167 settings.go:142] acquiring lock: {Name:mkd6064915932eca5a3b1d70feb4ec8240f340da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:44:19.793514 2835167 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:44:19.794554 2835167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/kubeconfig: {Name:mkd905aaf74d26e32c0b3e46a7edfbf13f4b98ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:44:19.794781 2835167 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:44:19.794907 2835167 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:44:19.795160 2835167 config.go:182] Loaded profile config "old-k8s-version-092258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:44:19.795206 2835167 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:44:19.795273 2835167 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-092258"
	I1121 14:44:19.795287 2835167 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-092258"
	I1121 14:44:19.795308 2835167 host.go:66] Checking if "old-k8s-version-092258" exists ...
	I1121 14:44:19.795815 2835167 cli_runner.go:164] Run: docker container inspect old-k8s-version-092258 --format={{.State.Status}}
	I1121 14:44:19.795979 2835167 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-092258"
	I1121 14:44:19.795995 2835167 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-092258"
	I1121 14:44:19.796255 2835167 cli_runner.go:164] Run: docker container inspect old-k8s-version-092258 --format={{.State.Status}}
	I1121 14:44:19.798851 2835167 out.go:179] * Verifying Kubernetes components...
	I1121 14:44:19.806315 2835167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:44:19.841770 2835167 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-092258"
	I1121 14:44:19.841810 2835167 host.go:66] Checking if "old-k8s-version-092258" exists ...
	I1121 14:44:19.842215 2835167 cli_runner.go:164] Run: docker container inspect old-k8s-version-092258 --format={{.State.Status}}
	I1121 14:44:19.843021 2835167 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:44:19.845981 2835167 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:44:19.846003 2835167 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:44:19.846070 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:44:19.880288 2835167 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:44:19.880310 2835167 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:44:19.880371 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:44:19.888389 2835167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36720 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa Username:docker}
	I1121 14:44:19.916676 2835167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36720 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa Username:docker}
	I1121 14:44:20.244082 2835167 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:44:20.285170 2835167 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:44:20.285362 2835167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:44:20.332070 2835167 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:44:21.059715 2835167 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-092258" to be "Ready" ...
	I1121 14:44:21.059828 2835167 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1121 14:44:21.566332 2835167 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-092258" context rescaled to 1 replicas
	I1121 14:44:21.605475 2835167 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.273369049s)
	I1121 14:44:21.608818 2835167 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1121 14:44:21.611888 2835167 addons.go:530] duration metric: took 1.816656544s for enable addons: enabled=[default-storageclass storage-provisioner]
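
The two addons enabled above can also be toggled by hand against this profile; a minimal sketch, using the profile name from the log:

    minikube -p old-k8s-version-092258 addons enable storage-provisioner
    minikube -p old-k8s-version-092258 addons list
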
	W1121 14:44:23.063813 2835167 node_ready.go:57] node "old-k8s-version-092258" has "Ready":"False" status (will retry)
	W1121 14:44:25.563129 2835167 node_ready.go:57] node "old-k8s-version-092258" has "Ready":"False" status (will retry)
	W1121 14:44:27.564011 2835167 node_ready.go:57] node "old-k8s-version-092258" has "Ready":"False" status (will retry)
	W1121 14:44:30.063612 2835167 node_ready.go:57] node "old-k8s-version-092258" has "Ready":"False" status (will retry)
	W1121 14:44:32.562923 2835167 node_ready.go:57] node "old-k8s-version-092258" has "Ready":"False" status (will retry)
	I1121 14:44:33.563363 2835167 node_ready.go:49] node "old-k8s-version-092258" is "Ready"
	I1121 14:44:33.563395 2835167 node_ready.go:38] duration metric: took 12.503648731s for node "old-k8s-version-092258" to be "Ready" ...
	I1121 14:44:33.563409 2835167 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:44:33.563474 2835167 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:44:33.580001 2835167 api_server.go:72] duration metric: took 13.7851816s to wait for apiserver process to appear ...
	I1121 14:44:33.580026 2835167 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:44:33.580045 2835167 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:44:33.589120 2835167 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1121 14:44:33.590563 2835167 api_server.go:141] control plane version: v1.28.0
	I1121 14:44:33.590586 2835167 api_server.go:131] duration metric: took 10.553339ms to wait for apiserver health ...
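
The healthz poll above is reproducible from the host; a sketch, assuming the node IP is reachable and that (as kubeadm-style clusters normally allow) /healthz is served to unauthenticated clients. The -k flag is needed because the certificate is signed by minikube's own CA:

    curl -k https://192.168.76.2:8443/healthz
    # expected body: ok
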
	I1121 14:44:33.590594 2835167 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:44:33.595235 2835167 system_pods.go:59] 8 kube-system pods found
	I1121 14:44:33.595322 2835167 system_pods.go:61] "coredns-5dd5756b68-86stv" [6a48c3f2-f439-40e1-885b-5850f95d1ffc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:44:33.595346 2835167 system_pods.go:61] "etcd-old-k8s-version-092258" [bbb172b1-cd74-44e9-ba24-92155ea08be4] Running
	I1121 14:44:33.595390 2835167 system_pods.go:61] "kindnet-tfn5q" [6bec8380-6059-40d0-b0ed-6c3906f84591] Running
	I1121 14:44:33.595417 2835167 system_pods.go:61] "kube-apiserver-old-k8s-version-092258" [adf091a2-7b6d-4ba0-a537-9c5f7f93c471] Running
	I1121 14:44:33.595442 2835167 system_pods.go:61] "kube-controller-manager-old-k8s-version-092258" [01f77916-588d-469c-b175-3bbcdfe34ce8] Running
	I1121 14:44:33.595479 2835167 system_pods.go:61] "kube-proxy-tdwt5" [94e025a3-f19d-40ce-b6a6-9e2eb3b8f998] Running
	I1121 14:44:33.595506 2835167 system_pods.go:61] "kube-scheduler-old-k8s-version-092258" [7dfed185-93bd-4218-9c17-a6105d34022f] Running
	I1121 14:44:33.595532 2835167 system_pods.go:61] "storage-provisioner" [a31c361f-8fb6-4726-a554-e70884e4d16e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:44:33.595572 2835167 system_pods.go:74] duration metric: took 4.969827ms to wait for pod list to return data ...
	I1121 14:44:33.595601 2835167 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:44:33.599253 2835167 default_sa.go:45] found service account: "default"
	I1121 14:44:33.599325 2835167 default_sa.go:55] duration metric: took 3.703418ms for default service account to be created ...
	I1121 14:44:33.599363 2835167 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:44:33.603344 2835167 system_pods.go:86] 8 kube-system pods found
	I1121 14:44:33.603423 2835167 system_pods.go:89] "coredns-5dd5756b68-86stv" [6a48c3f2-f439-40e1-885b-5850f95d1ffc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:44:33.603457 2835167 system_pods.go:89] "etcd-old-k8s-version-092258" [bbb172b1-cd74-44e9-ba24-92155ea08be4] Running
	I1121 14:44:33.603486 2835167 system_pods.go:89] "kindnet-tfn5q" [6bec8380-6059-40d0-b0ed-6c3906f84591] Running
	I1121 14:44:33.603513 2835167 system_pods.go:89] "kube-apiserver-old-k8s-version-092258" [adf091a2-7b6d-4ba0-a537-9c5f7f93c471] Running
	I1121 14:44:33.603549 2835167 system_pods.go:89] "kube-controller-manager-old-k8s-version-092258" [01f77916-588d-469c-b175-3bbcdfe34ce8] Running
	I1121 14:44:33.603576 2835167 system_pods.go:89] "kube-proxy-tdwt5" [94e025a3-f19d-40ce-b6a6-9e2eb3b8f998] Running
	I1121 14:44:33.603600 2835167 system_pods.go:89] "kube-scheduler-old-k8s-version-092258" [7dfed185-93bd-4218-9c17-a6105d34022f] Running
	I1121 14:44:33.603640 2835167 system_pods.go:89] "storage-provisioner" [a31c361f-8fb6-4726-a554-e70884e4d16e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:44:33.603680 2835167 retry.go:31] will retry after 248.130267ms: missing components: kube-dns
	I1121 14:44:33.863548 2835167 system_pods.go:86] 8 kube-system pods found
	I1121 14:44:33.863646 2835167 system_pods.go:89] "coredns-5dd5756b68-86stv" [6a48c3f2-f439-40e1-885b-5850f95d1ffc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:44:33.863677 2835167 system_pods.go:89] "etcd-old-k8s-version-092258" [bbb172b1-cd74-44e9-ba24-92155ea08be4] Running
	I1121 14:44:33.863699 2835167 system_pods.go:89] "kindnet-tfn5q" [6bec8380-6059-40d0-b0ed-6c3906f84591] Running
	I1121 14:44:33.863735 2835167 system_pods.go:89] "kube-apiserver-old-k8s-version-092258" [adf091a2-7b6d-4ba0-a537-9c5f7f93c471] Running
	I1121 14:44:33.863762 2835167 system_pods.go:89] "kube-controller-manager-old-k8s-version-092258" [01f77916-588d-469c-b175-3bbcdfe34ce8] Running
	I1121 14:44:33.863787 2835167 system_pods.go:89] "kube-proxy-tdwt5" [94e025a3-f19d-40ce-b6a6-9e2eb3b8f998] Running
	I1121 14:44:33.863827 2835167 system_pods.go:89] "kube-scheduler-old-k8s-version-092258" [7dfed185-93bd-4218-9c17-a6105d34022f] Running
	I1121 14:44:33.863857 2835167 system_pods.go:89] "storage-provisioner" [a31c361f-8fb6-4726-a554-e70884e4d16e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:44:33.863904 2835167 retry.go:31] will retry after 379.807267ms: missing components: kube-dns
	I1121 14:44:34.248297 2835167 system_pods.go:86] 8 kube-system pods found
	I1121 14:44:34.248331 2835167 system_pods.go:89] "coredns-5dd5756b68-86stv" [6a48c3f2-f439-40e1-885b-5850f95d1ffc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:44:34.248338 2835167 system_pods.go:89] "etcd-old-k8s-version-092258" [bbb172b1-cd74-44e9-ba24-92155ea08be4] Running
	I1121 14:44:34.248344 2835167 system_pods.go:89] "kindnet-tfn5q" [6bec8380-6059-40d0-b0ed-6c3906f84591] Running
	I1121 14:44:34.248348 2835167 system_pods.go:89] "kube-apiserver-old-k8s-version-092258" [adf091a2-7b6d-4ba0-a537-9c5f7f93c471] Running
	I1121 14:44:34.248352 2835167 system_pods.go:89] "kube-controller-manager-old-k8s-version-092258" [01f77916-588d-469c-b175-3bbcdfe34ce8] Running
	I1121 14:44:34.248356 2835167 system_pods.go:89] "kube-proxy-tdwt5" [94e025a3-f19d-40ce-b6a6-9e2eb3b8f998] Running
	I1121 14:44:34.248360 2835167 system_pods.go:89] "kube-scheduler-old-k8s-version-092258" [7dfed185-93bd-4218-9c17-a6105d34022f] Running
	I1121 14:44:34.248365 2835167 system_pods.go:89] "storage-provisioner" [a31c361f-8fb6-4726-a554-e70884e4d16e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:44:34.248380 2835167 retry.go:31] will retry after 418.10052ms: missing components: kube-dns
	I1121 14:44:34.670581 2835167 system_pods.go:86] 8 kube-system pods found
	I1121 14:44:34.670670 2835167 system_pods.go:89] "coredns-5dd5756b68-86stv" [6a48c3f2-f439-40e1-885b-5850f95d1ffc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:44:34.670687 2835167 system_pods.go:89] "etcd-old-k8s-version-092258" [bbb172b1-cd74-44e9-ba24-92155ea08be4] Running
	I1121 14:44:34.670694 2835167 system_pods.go:89] "kindnet-tfn5q" [6bec8380-6059-40d0-b0ed-6c3906f84591] Running
	I1121 14:44:34.670698 2835167 system_pods.go:89] "kube-apiserver-old-k8s-version-092258" [adf091a2-7b6d-4ba0-a537-9c5f7f93c471] Running
	I1121 14:44:34.670703 2835167 system_pods.go:89] "kube-controller-manager-old-k8s-version-092258" [01f77916-588d-469c-b175-3bbcdfe34ce8] Running
	I1121 14:44:34.670707 2835167 system_pods.go:89] "kube-proxy-tdwt5" [94e025a3-f19d-40ce-b6a6-9e2eb3b8f998] Running
	I1121 14:44:34.670711 2835167 system_pods.go:89] "kube-scheduler-old-k8s-version-092258" [7dfed185-93bd-4218-9c17-a6105d34022f] Running
	I1121 14:44:34.670736 2835167 system_pods.go:89] "storage-provisioner" [a31c361f-8fb6-4726-a554-e70884e4d16e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:44:34.670759 2835167 retry.go:31] will retry after 454.42102ms: missing components: kube-dns
	I1121 14:44:35.130522 2835167 system_pods.go:86] 8 kube-system pods found
	I1121 14:44:35.130555 2835167 system_pods.go:89] "coredns-5dd5756b68-86stv" [6a48c3f2-f439-40e1-885b-5850f95d1ffc] Running
	I1121 14:44:35.130563 2835167 system_pods.go:89] "etcd-old-k8s-version-092258" [bbb172b1-cd74-44e9-ba24-92155ea08be4] Running
	I1121 14:44:35.130568 2835167 system_pods.go:89] "kindnet-tfn5q" [6bec8380-6059-40d0-b0ed-6c3906f84591] Running
	I1121 14:44:35.130573 2835167 system_pods.go:89] "kube-apiserver-old-k8s-version-092258" [adf091a2-7b6d-4ba0-a537-9c5f7f93c471] Running
	I1121 14:44:35.130579 2835167 system_pods.go:89] "kube-controller-manager-old-k8s-version-092258" [01f77916-588d-469c-b175-3bbcdfe34ce8] Running
	I1121 14:44:35.130582 2835167 system_pods.go:89] "kube-proxy-tdwt5" [94e025a3-f19d-40ce-b6a6-9e2eb3b8f998] Running
	I1121 14:44:35.130586 2835167 system_pods.go:89] "kube-scheduler-old-k8s-version-092258" [7dfed185-93bd-4218-9c17-a6105d34022f] Running
	I1121 14:44:35.130590 2835167 system_pods.go:89] "storage-provisioner" [a31c361f-8fb6-4726-a554-e70884e4d16e] Running
	I1121 14:44:35.130598 2835167 system_pods.go:126] duration metric: took 1.531191935s to wait for k8s-apps to be running ...
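
The retry loop above was waiting on the k8s-app=kube-dns label (the same label the extra wait later in this log uses); the equivalent manual check is:

    kubectl --context old-k8s-version-092258 -n kube-system get pods -l k8s-app=kube-dns
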
	I1121 14:44:35.130606 2835167 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:44:35.130663 2835167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:44:35.145395 2835167 system_svc.go:56] duration metric: took 14.776546ms WaitForService to wait for kubelet
	I1121 14:44:35.145455 2835167 kubeadm.go:587] duration metric: took 15.350619907s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:44:35.145475 2835167 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:44:35.148334 2835167 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 14:44:35.148369 2835167 node_conditions.go:123] node cpu capacity is 2
	I1121 14:44:35.148382 2835167 node_conditions.go:105] duration metric: took 2.896581ms to run NodePressure ...
	I1121 14:44:35.148393 2835167 start.go:242] waiting for startup goroutines ...
	I1121 14:44:35.148401 2835167 start.go:247] waiting for cluster config update ...
	I1121 14:44:35.148412 2835167 start.go:256] writing updated cluster config ...
	I1121 14:44:35.148743 2835167 ssh_runner.go:195] Run: rm -f paused
	I1121 14:44:35.152681 2835167 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:44:35.157000 2835167 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-86stv" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:35.162556 2835167 pod_ready.go:94] pod "coredns-5dd5756b68-86stv" is "Ready"
	I1121 14:44:35.162601 2835167 pod_ready.go:86] duration metric: took 5.502719ms for pod "coredns-5dd5756b68-86stv" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:35.166472 2835167 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-092258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:35.171935 2835167 pod_ready.go:94] pod "etcd-old-k8s-version-092258" is "Ready"
	I1121 14:44:35.171965 2835167 pod_ready.go:86] duration metric: took 5.463835ms for pod "etcd-old-k8s-version-092258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:35.175582 2835167 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-092258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:35.181518 2835167 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-092258" is "Ready"
	I1121 14:44:35.181551 2835167 pod_ready.go:86] duration metric: took 5.941771ms for pod "kube-apiserver-old-k8s-version-092258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:35.184926 2835167 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-092258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:35.557460 2835167 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-092258" is "Ready"
	I1121 14:44:35.557489 2835167 pod_ready.go:86] duration metric: took 372.537001ms for pod "kube-controller-manager-old-k8s-version-092258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:35.757817 2835167 pod_ready.go:83] waiting for pod "kube-proxy-tdwt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:36.157592 2835167 pod_ready.go:94] pod "kube-proxy-tdwt5" is "Ready"
	I1121 14:44:36.157618 2835167 pod_ready.go:86] duration metric: took 399.771111ms for pod "kube-proxy-tdwt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:36.357529 2835167 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-092258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:36.757566 2835167 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-092258" is "Ready"
	I1121 14:44:36.757596 2835167 pod_ready.go:86] duration metric: took 400.036784ms for pod "kube-scheduler-old-k8s-version-092258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:36.757610 2835167 pod_ready.go:40] duration metric: took 1.604896006s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
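
A rough hand-rolled equivalent of this extra wait, using kubectl wait instead of the harness's poller (timeout taken from the log):

    kubectl --context old-k8s-version-092258 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s
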
	I1121 14:44:36.818445 2835167 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1121 14:44:36.821296 2835167 out.go:203] 
	W1121 14:44:36.824281 2835167 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1121 14:44:36.827383 2835167 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1121 14:44:36.830301 2835167 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-092258" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	ab7b2c1339a58       1611cd07b61d5       7 seconds ago       Running             busybox                   0                   befa3559e32d9       busybox                                          default
	4fa0544fe52cc       97e04611ad434       13 seconds ago      Running             coredns                   0                   b58b59f73a24b       coredns-5dd5756b68-86stv                         kube-system
	c6ace07879b84       ba04bb24b9575       13 seconds ago      Running             storage-provisioner       0                   3680e435bb193       storage-provisioner                              kube-system
	495595ef81ee7       b1a8c6f707935       24 seconds ago      Running             kindnet-cni               0                   f4ddede8f051f       kindnet-tfn5q                                    kube-system
	630ebb9fe56a1       940f54a5bcae9       26 seconds ago      Running             kube-proxy                0                   1812faa70a69a       kube-proxy-tdwt5                                 kube-system
	331a280f7d8fb       46cc66ccc7c19       46 seconds ago      Running             kube-controller-manager   0                   9d7554dad7608       kube-controller-manager-old-k8s-version-092258   kube-system
	46391c1bd1fc7       762dce4090c5f       46 seconds ago      Running             kube-scheduler            0                   88bf0a72d6a98       kube-scheduler-old-k8s-version-092258            kube-system
	32a76684e0ad4       9cdd6470f48c8       46 seconds ago      Running             etcd                      0                   edaf6d16372ae       etcd-old-k8s-version-092258                      kube-system
	2e1cd1261e99f       00543d2fe5d71       46 seconds ago      Running             kube-apiserver            0                   58f4b63de6fd5       kube-apiserver-old-k8s-version-092258            kube-system
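
The table above is the CRI view of the node; the same listing can be pulled directly, assuming crictl is present in the node image (it normally is in minikube):

    minikube -p old-k8s-version-092258 ssh "sudo crictl ps -a"
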
	
	
	==> containerd <==
	Nov 21 14:44:33 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:33.681687355Z" level=info msg="CreateContainer within sandbox \"3680e435bb193d749f6cac5ee0a23ca21a777ba606c46a9f454cb42ef4060e47\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"c6ace07879b84e705bc8b532f8cd9162404b63746ad9faeae44e245e26539ead\""
	Nov 21 14:44:33 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:33.685318192Z" level=info msg="StartContainer for \"c6ace07879b84e705bc8b532f8cd9162404b63746ad9faeae44e245e26539ead\""
	Nov 21 14:44:33 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:33.689247614Z" level=info msg="connecting to shim c6ace07879b84e705bc8b532f8cd9162404b63746ad9faeae44e245e26539ead" address="unix:///run/containerd/s/6e69ecb1899b9e75727f8fe7f211e1f82d40f965205bb1565eeae343c2bafd56" protocol=ttrpc version=3
	Nov 21 14:44:33 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:33.690732150Z" level=info msg="CreateContainer within sandbox \"b58b59f73a24bb52a5f6c210ec1d0dfbddbbc55dbc0fd609423879994aa0b8ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4fa0544fe52cc0b0b57fcb28182f1f20dc7c79b3ef53dcb6dc677efecd5a9cc9\""
	Nov 21 14:44:33 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:33.693417924Z" level=info msg="StartContainer for \"4fa0544fe52cc0b0b57fcb28182f1f20dc7c79b3ef53dcb6dc677efecd5a9cc9\""
	Nov 21 14:44:33 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:33.696432221Z" level=info msg="connecting to shim 4fa0544fe52cc0b0b57fcb28182f1f20dc7c79b3ef53dcb6dc677efecd5a9cc9" address="unix:///run/containerd/s/a5f36c12d3eba8a08addb4ff6f6c45f4b1f35adc7b831563646c8ea27992d003" protocol=ttrpc version=3
	Nov 21 14:44:33 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:33.785690082Z" level=info msg="StartContainer for \"4fa0544fe52cc0b0b57fcb28182f1f20dc7c79b3ef53dcb6dc677efecd5a9cc9\" returns successfully"
	Nov 21 14:44:33 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:33.815503407Z" level=info msg="StartContainer for \"c6ace07879b84e705bc8b532f8cd9162404b63746ad9faeae44e245e26539ead\" returns successfully"
	Nov 21 14:44:37 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:37.378935963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:4fd396a4-7f86-4bac-b99a-f7427bb5deb9,Namespace:default,Attempt:0,}"
	Nov 21 14:44:37 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:37.481124358Z" level=info msg="connecting to shim befa3559e32d903c1abf0bc725ae5f12a26cdbb8b3fb4a57980282d9931d9d26" address="unix:///run/containerd/s/71dcf6bf5df9beb4a3d248e771df5a382c0db1f3a2b82a021424cdeb0bc07ccb" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:44:37 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:37.544176700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:4fd396a4-7f86-4bac-b99a-f7427bb5deb9,Namespace:default,Attempt:0,} returns sandbox id \"befa3559e32d903c1abf0bc725ae5f12a26cdbb8b3fb4a57980282d9931d9d26\""
	Nov 21 14:44:37 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:37.546307355Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.885153902Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.887206340Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937188"
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.889567142Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.893770222Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.894536985Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.348184988s"
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.894577862Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.898592935Z" level=info msg="CreateContainer within sandbox \"befa3559e32d903c1abf0bc725ae5f12a26cdbb8b3fb4a57980282d9931d9d26\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.912721761Z" level=info msg="Container ab7b2c1339a58ca880ca0312fd5f7d62085c7261261bf3758b721a01af22d534: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.926024035Z" level=info msg="CreateContainer within sandbox \"befa3559e32d903c1abf0bc725ae5f12a26cdbb8b3fb4a57980282d9931d9d26\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"ab7b2c1339a58ca880ca0312fd5f7d62085c7261261bf3758b721a01af22d534\""
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.927048695Z" level=info msg="StartContainer for \"ab7b2c1339a58ca880ca0312fd5f7d62085c7261261bf3758b721a01af22d534\""
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.928757405Z" level=info msg="connecting to shim ab7b2c1339a58ca880ca0312fd5f7d62085c7261261bf3758b721a01af22d534" address="unix:///run/containerd/s/71dcf6bf5df9beb4a3d248e771df5a382c0db1f3a2b82a021424cdeb0bc07ccb" protocol=ttrpc version=3
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.996819622Z" level=info msg="StartContainer for \"ab7b2c1339a58ca880ca0312fd5f7d62085c7261261bf3758b721a01af22d534\" returns successfully"
	Nov 21 14:44:46 old-k8s-version-092258 containerd[760]: E1121 14:44:46.197863     760 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [4fa0544fe52cc0b0b57fcb28182f1f20dc7c79b3ef53dcb6dc677efecd5a9cc9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36979 - 27014 "HINFO IN 2294269810657567619.5005884824654199478. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03500164s
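
The host.minikube.internal record injected into CoreDNS earlier in this log (the sed edit at 14:44:20) lands in the coredns ConfigMap as a hosts block:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }

and can be inspected with:

    kubectl --context old-k8s-version-092258 -n kube-system get configmap coredns -o yaml
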
	
	
	==> describe nodes <==
	Name:               old-k8s-version-092258
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-092258
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=old-k8s-version-092258
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_44_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:44:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-092258
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:44:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:44:37 +0000   Fri, 21 Nov 2025 14:44:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:44:37 +0000   Fri, 21 Nov 2025 14:44:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:44:37 +0000   Fri, 21 Nov 2025 14:44:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:44:37 +0000   Fri, 21 Nov 2025 14:44:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-092258
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                9e4fe947-6f95-4914-9cd3-ccd713480a21
	  Boot ID:                    41b0e09d-5a9a-49c9-8980-dca608ba3fce
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-86stv                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-092258                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-tfn5q                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-092258             250m (12%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-092258    200m (10%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-tdwt5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-092258             100m (5%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
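
A quick check of the Allocated resources arithmetic against the per-pod rows above: CPU requests 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, i.e. 850m of the node's 2000m ≈ 42%; memory requests 70Mi + 100Mi + 50Mi = 220Mi and limits 170Mi + 50Mi = 220Mi, each roughly 2% of the 8022296Ki node memory.
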
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-092258 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-092258 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-092258 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  40s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-092258 event: Registered Node old-k8s-version-092258 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-092258 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 13:02] overlayfs: idmapped layers are currently not supported
	[Nov21 13:03] overlayfs: idmapped layers are currently not supported
	[Nov21 13:06] overlayfs: idmapped layers are currently not supported
	[Nov21 13:08] overlayfs: idmapped layers are currently not supported
	[Nov21 13:09] overlayfs: idmapped layers are currently not supported
	[Nov21 13:10] overlayfs: idmapped layers are currently not supported
	[ +19.808801] overlayfs: idmapped layers are currently not supported
	[Nov21 13:11] overlayfs: idmapped layers are currently not supported
	[Nov21 13:12] overlayfs: idmapped layers are currently not supported
	[Nov21 13:13] overlayfs: idmapped layers are currently not supported
	[Nov21 13:14] overlayfs: idmapped layers are currently not supported
	[Nov21 13:15] overlayfs: idmapped layers are currently not supported
	[ +16.772572] overlayfs: idmapped layers are currently not supported
	[Nov21 13:16] overlayfs: idmapped layers are currently not supported
	[Nov21 13:17] overlayfs: idmapped layers are currently not supported
	[ +27.396777] overlayfs: idmapped layers are currently not supported
	[Nov21 13:18] overlayfs: idmapped layers are currently not supported
	[ +25.430119] overlayfs: idmapped layers are currently not supported
	[Nov21 13:19] overlayfs: idmapped layers are currently not supported
	[Nov21 13:20] overlayfs: idmapped layers are currently not supported
	[Nov21 13:21] overlayfs: idmapped layers are currently not supported
	[Nov21 13:22] overlayfs: idmapped layers are currently not supported
	[Nov21 13:23] overlayfs: idmapped layers are currently not supported
	[Nov21 13:24] overlayfs: idmapped layers are currently not supported
	[Nov21 13:55] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [32a76684e0ad48afa24dffa56bbd612225875cea5526f2fe91da5620cdd3737e] <==
	{"level":"info","ts":"2025-11-21T14:44:00.857351Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-21T14:44:00.860773Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-21T14:44:00.860806Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-21T14:44:00.857529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-21T14:44:00.861198Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-21T14:44:00.857558Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-21T14:44:00.857577Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-21T14:44:01.029245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-21T14:44:01.029466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-21T14:44:01.02957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-21T14:44:01.029683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-21T14:44:01.029774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-21T14:44:01.029858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-21T14:44:01.029946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-21T14:44:01.033213Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-092258 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-21T14:44:01.033415Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:44:01.034523Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-21T14:44:01.034765Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:44:01.035192Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:44:01.036885Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:44:01.04112Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:44:01.041303Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:44:01.055053Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-21T14:44:01.060057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-21T14:44:01.060254Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 14:44:47 up 19:27,  0 user,  load average: 2.19, 3.09, 2.75
	Linux old-k8s-version-092258 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [495595ef81ee7d983a4b62890080114a468713ef14bf361720fb1ef51e30f35d] <==
	I1121 14:44:22.827794       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:44:22.828022       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1121 14:44:22.828147       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:44:22.828164       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:44:22.828175       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:44:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:44:23.024438       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:44:23.024516       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:44:23.024545       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:44:23.025736       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:44:23.224664       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:44:23.224747       1 metrics.go:72] Registering metrics
	I1121 14:44:23.224843       1 controller.go:711] "Syncing nftables rules"
	I1121 14:44:33.032002       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:44:33.032055       1 main.go:301] handling current node
	I1121 14:44:43.024839       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:44:43.024870       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2e1cd1261e99f5cf421f076a966eedd90258d75cd1735ec5e4bc9ae1d5576945] <==
	I1121 14:44:04.361764       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1121 14:44:04.361814       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1121 14:44:04.367613       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1121 14:44:04.367684       1 aggregator.go:166] initial CRD sync complete...
	I1121 14:44:04.367697       1 autoregister_controller.go:141] Starting autoregister controller
	I1121 14:44:04.367705       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:44:04.367714       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:44:04.374026       1 controller.go:624] quota admission added evaluator for: namespaces
	I1121 14:44:04.403168       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:44:04.422675       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1121 14:44:04.968994       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:44:04.974442       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:44:04.974470       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:44:05.722694       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:44:05.789801       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:44:05.889316       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:44:05.896471       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1121 14:44:05.897760       1 controller.go:624] quota admission added evaluator for: endpoints
	I1121 14:44:05.902883       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:44:06.203440       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1121 14:44:07.370770       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1121 14:44:07.383830       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:44:07.398248       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1121 14:44:19.834381       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:44:20.024025       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [331a280f7d8fb0893c46a22085825d84571038b23952dd64524b062bc7f08b74] <==
	I1121 14:44:19.212645       1 shared_informer.go:318] Caches are synced for endpoint
	I1121 14:44:19.212738       1 shared_informer.go:318] Caches are synced for HPA
	I1121 14:44:19.212773       1 shared_informer.go:318] Caches are synced for disruption
	I1121 14:44:19.212803       1 shared_informer.go:318] Caches are synced for attach detach
	I1121 14:44:19.627294       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:44:19.659367       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:44:19.659575       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1121 14:44:19.901701       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tdwt5"
	I1121 14:44:19.929907       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tfn5q"
	I1121 14:44:20.044696       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1121 14:44:20.152846       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-v7mnp"
	I1121 14:44:20.183528       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-86stv"
	I1121 14:44:20.224114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="183.103851ms"
	I1121 14:44:20.243991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.819807ms"
	I1121 14:44:20.244109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.6µs"
	I1121 14:44:21.107620       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1121 14:44:21.134973       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-v7mnp"
	I1121 14:44:21.155140       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.168473ms"
	I1121 14:44:21.171976       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.779877ms"
	I1121 14:44:21.172179       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.497µs"
	I1121 14:44:33.168291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.391µs"
	I1121 14:44:33.193460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="100.346µs"
	I1121 14:44:34.128063       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1121 14:44:34.848411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.00588ms"
	I1121 14:44:34.848685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="129.432µs"
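
The ScalingReplicaSet events above (2 replicas scaled down to 1) match the "rescaled to 1 replicas" kapi.go entry in the start log; the end state is easy to verify:

    kubectl --context old-k8s-version-092258 -n kube-system get deployment coredns
    # expect READY 1/1
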
	
	
	==> kube-proxy [630ebb9fe56a1bea1ef2dfe24de2086594eb0afbdaf547e41ce7c777d9eb7705] <==
	I1121 14:44:20.860188       1 server_others.go:69] "Using iptables proxy"
	I1121 14:44:20.878393       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1121 14:44:20.931156       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:44:20.936886       1 server_others.go:152] "Using iptables Proxier"
	I1121 14:44:20.936939       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1121 14:44:20.936948       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1121 14:44:20.936971       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1121 14:44:20.937761       1 server.go:846] "Version info" version="v1.28.0"
	I1121 14:44:20.937784       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:44:20.938544       1 config.go:188] "Starting service config controller"
	I1121 14:44:20.938593       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1121 14:44:20.938625       1 config.go:97] "Starting endpoint slice config controller"
	I1121 14:44:20.938635       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1121 14:44:20.940292       1 config.go:315] "Starting node config controller"
	I1121 14:44:20.940306       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1121 14:44:21.040184       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1121 14:44:21.040242       1 shared_informer.go:318] Caches are synced for service config
	I1121 14:44:21.040487       1 shared_informer.go:318] Caches are synced for node config
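
With the iptables proxier selected above, service routing is programmed into the KUBE-SERVICES chain of the nat table; a sketch for inspecting it on the node:

    minikube -p old-k8s-version-092258 ssh "sudo iptables -t nat -L KUBE-SERVICES -n"
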
	
	
	==> kube-scheduler [46391c1bd1fc737d22bd847c1d63f9bd14e4d892ef33d465e9204dc377dd6002] <==
	W1121 14:44:04.821909       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1121 14:44:04.822028       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1121 14:44:04.822197       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1121 14:44:04.822220       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1121 14:44:04.824601       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1121 14:44:04.825484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1121 14:44:04.824774       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1121 14:44:04.825902       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1121 14:44:04.826065       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1121 14:44:04.826044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1121 14:44:04.824991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1121 14:44:04.826420       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1121 14:44:04.825212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1121 14:44:04.825284       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1121 14:44:04.825347       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1121 14:44:04.825382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1121 14:44:04.825431       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1121 14:44:04.824928       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1121 14:44:04.826802       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1121 14:44:04.826945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1121 14:44:04.827063       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1121 14:44:04.827207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1121 14:44:04.827355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1121 14:44:04.827495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1121 14:44:06.304765       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
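	
	The reflector warnings above are the transient list/watch authorization failures commonly seen while kube-scheduler starts before its RBAC bindings have propagated; the cache-sync line that follows shows the informers recovered. A minimal sketch for confirming the grants after startup, assuming the kubectl context created by this run:
	
	# Impersonate the scheduler and probe resources named in the errors above;
	# "yes" means the RBAC grants converged.
	kubectl --context old-k8s-version-092258 auth can-i list pods --as=system:kube-scheduler
	kubectl --context old-k8s-version-092258 auth can-i watch csidrivers.storage.k8s.io --as=system:kube-scheduler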
	
	
	==> kubelet <==
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.149863    1526 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.150498    1526 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.918591    1526 topology_manager.go:215] "Topology Admit Handler" podUID="94e025a3-f19d-40ce-b6a6-9e2eb3b8f998" podNamespace="kube-system" podName="kube-proxy-tdwt5"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.954210    1526 topology_manager.go:215] "Topology Admit Handler" podUID="6bec8380-6059-40d0-b0ed-6c3906f84591" podNamespace="kube-system" podName="kindnet-tfn5q"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.980360    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/94e025a3-f19d-40ce-b6a6-9e2eb3b8f998-kube-proxy\") pod \"kube-proxy-tdwt5\" (UID: \"94e025a3-f19d-40ce-b6a6-9e2eb3b8f998\") " pod="kube-system/kube-proxy-tdwt5"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.980619    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94e025a3-f19d-40ce-b6a6-9e2eb3b8f998-xtables-lock\") pod \"kube-proxy-tdwt5\" (UID: \"94e025a3-f19d-40ce-b6a6-9e2eb3b8f998\") " pod="kube-system/kube-proxy-tdwt5"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.980760    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94e025a3-f19d-40ce-b6a6-9e2eb3b8f998-lib-modules\") pod \"kube-proxy-tdwt5\" (UID: \"94e025a3-f19d-40ce-b6a6-9e2eb3b8f998\") " pod="kube-system/kube-proxy-tdwt5"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.980886    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6bec8380-6059-40d0-b0ed-6c3906f84591-cni-cfg\") pod \"kindnet-tfn5q\" (UID: \"6bec8380-6059-40d0-b0ed-6c3906f84591\") " pod="kube-system/kindnet-tfn5q"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.981004    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bec8380-6059-40d0-b0ed-6c3906f84591-xtables-lock\") pod \"kindnet-tfn5q\" (UID: \"6bec8380-6059-40d0-b0ed-6c3906f84591\") " pod="kube-system/kindnet-tfn5q"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.981145    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bec8380-6059-40d0-b0ed-6c3906f84591-lib-modules\") pod \"kindnet-tfn5q\" (UID: \"6bec8380-6059-40d0-b0ed-6c3906f84591\") " pod="kube-system/kindnet-tfn5q"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.981319    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5lf7\" (UniqueName: \"kubernetes.io/projected/6bec8380-6059-40d0-b0ed-6c3906f84591-kube-api-access-m5lf7\") pod \"kindnet-tfn5q\" (UID: \"6bec8380-6059-40d0-b0ed-6c3906f84591\") " pod="kube-system/kindnet-tfn5q"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.981442    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2rxs\" (UniqueName: \"kubernetes.io/projected/94e025a3-f19d-40ce-b6a6-9e2eb3b8f998-kube-api-access-g2rxs\") pod \"kube-proxy-tdwt5\" (UID: \"94e025a3-f19d-40ce-b6a6-9e2eb3b8f998\") " pod="kube-system/kube-proxy-tdwt5"
	Nov 21 14:44:22 old-k8s-version-092258 kubelet[1526]: I1121 14:44:22.794618    1526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tdwt5" podStartSLOduration=3.794572825 podCreationTimestamp="2025-11-21 14:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:44:21.775006119 +0000 UTC m=+14.440254134" watchObservedRunningTime="2025-11-21 14:44:22.794572825 +0000 UTC m=+15.459820816"
	Nov 21 14:44:33 old-k8s-version-092258 kubelet[1526]: I1121 14:44:33.136665    1526 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 21 14:44:33 old-k8s-version-092258 kubelet[1526]: I1121 14:44:33.168328    1526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-tfn5q" podStartSLOduration=12.176504309 podCreationTimestamp="2025-11-21 14:44:19 +0000 UTC" firstStartedPulling="2025-11-21 14:44:20.481898213 +0000 UTC m=+13.147146196" lastFinishedPulling="2025-11-21 14:44:22.473675721 +0000 UTC m=+15.138923704" observedRunningTime="2025-11-21 14:44:22.795889554 +0000 UTC m=+15.461137546" watchObservedRunningTime="2025-11-21 14:44:33.168281817 +0000 UTC m=+25.833529808"
	Nov 21 14:44:33 old-k8s-version-092258 kubelet[1526]: I1121 14:44:33.168810    1526 topology_manager.go:215] "Topology Admit Handler" podUID="6a48c3f2-f439-40e1-885b-5850f95d1ffc" podNamespace="kube-system" podName="coredns-5dd5756b68-86stv"
	Nov 21 14:44:33 old-k8s-version-092258 kubelet[1526]: I1121 14:44:33.174944    1526 topology_manager.go:215] "Topology Admit Handler" podUID="a31c361f-8fb6-4726-a554-e70884e4d16e" podNamespace="kube-system" podName="storage-provisioner"
	Nov 21 14:44:33 old-k8s-version-092258 kubelet[1526]: I1121 14:44:33.200360    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktnhs\" (UniqueName: \"kubernetes.io/projected/6a48c3f2-f439-40e1-885b-5850f95d1ffc-kube-api-access-ktnhs\") pod \"coredns-5dd5756b68-86stv\" (UID: \"6a48c3f2-f439-40e1-885b-5850f95d1ffc\") " pod="kube-system/coredns-5dd5756b68-86stv"
	Nov 21 14:44:33 old-k8s-version-092258 kubelet[1526]: I1121 14:44:33.200594    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a31c361f-8fb6-4726-a554-e70884e4d16e-tmp\") pod \"storage-provisioner\" (UID: \"a31c361f-8fb6-4726-a554-e70884e4d16e\") " pod="kube-system/storage-provisioner"
	Nov 21 14:44:33 old-k8s-version-092258 kubelet[1526]: I1121 14:44:33.200711    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxf7g\" (UniqueName: \"kubernetes.io/projected/a31c361f-8fb6-4726-a554-e70884e4d16e-kube-api-access-xxf7g\") pod \"storage-provisioner\" (UID: \"a31c361f-8fb6-4726-a554-e70884e4d16e\") " pod="kube-system/storage-provisioner"
	Nov 21 14:44:33 old-k8s-version-092258 kubelet[1526]: I1121 14:44:33.200832    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a48c3f2-f439-40e1-885b-5850f95d1ffc-config-volume\") pod \"coredns-5dd5756b68-86stv\" (UID: \"6a48c3f2-f439-40e1-885b-5850f95d1ffc\") " pod="kube-system/coredns-5dd5756b68-86stv"
	Nov 21 14:44:34 old-k8s-version-092258 kubelet[1526]: I1121 14:44:34.812385    1526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.812339422 podCreationTimestamp="2025-11-21 14:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:44:34.811999567 +0000 UTC m=+27.477247550" watchObservedRunningTime="2025-11-21 14:44:34.812339422 +0000 UTC m=+27.477587405"
	Nov 21 14:44:34 old-k8s-version-092258 kubelet[1526]: I1121 14:44:34.830835    1526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-86stv" podStartSLOduration=14.83078559 podCreationTimestamp="2025-11-21 14:44:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:44:34.830509339 +0000 UTC m=+27.495757330" watchObservedRunningTime="2025-11-21 14:44:34.83078559 +0000 UTC m=+27.496033581"
	Nov 21 14:44:37 old-k8s-version-092258 kubelet[1526]: I1121 14:44:37.064261    1526 topology_manager.go:215] "Topology Admit Handler" podUID="4fd396a4-7f86-4bac-b99a-f7427bb5deb9" podNamespace="default" podName="busybox"
	Nov 21 14:44:37 old-k8s-version-092258 kubelet[1526]: I1121 14:44:37.128201    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmbgq\" (UniqueName: \"kubernetes.io/projected/4fd396a4-7f86-4bac-b99a-f7427bb5deb9-kube-api-access-tmbgq\") pod \"busybox\" (UID: \"4fd396a4-7f86-4bac-b99a-f7427bb5deb9\") " pod="default/busybox"
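	
	The kubelet sequence above covers node networking and pod admission: the node receives pod CIDR 10.244.0.0/24, the topology manager admits kube-proxy, kindnet, CoreDNS, storage-provisioner and finally the busybox test pod, and the volume reconciler attaches each pod's configmap, host-path and projected token volumes. A hedged cross-check of the CIDR assignment from outside the node:
	
	# Confirm the pod CIDR the kubelet reported (node name equals the profile name on a single-node cluster).
	kubectl --context old-k8s-version-092258 get node old-k8s-version-092258 -o jsonpath='{.spec.podCIDR}{"\n"}'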
	
	
	==> storage-provisioner [c6ace07879b84e705bc8b532f8cd9162404b63746ad9faeae44e245e26539ead] <==
	I1121 14:44:33.821827       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:44:33.835269       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:44:33.835522       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1121 14:44:33.844745       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:44:33.845108       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-092258_43824c0e-5444-4d63-9465-8f0bcb9e3d2b!
	I1121 14:44:33.845246       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6d6cfaa-85d7-41d0-9ba2-d501adb4d7fd", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-092258_43824c0e-5444-4d63-9465-8f0bcb9e3d2b became leader
	I1121 14:44:33.946309       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-092258_43824c0e-5444-4d63-9465-8f0bcb9e3d2b!
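	
	The provisioner log above is standard client-go leader election: the pod acquires the kube-system/k8s.io-minikube-hostpath lease (recorded on an Endpoints object, per the Event line) before starting its controller loop. To see the current holder, one option, assuming the same context:
	
	# The holder identity lives in the control-plane.alpha.kubernetes.io/leader annotation.
	kubectl --context old-k8s-version-092258 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}{"\n"}'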
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-092258 -n old-k8s-version-092258
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-092258 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-092258
helpers_test.go:243: (dbg) docker inspect old-k8s-version-092258:

-- stdout --
	[
	    {
	        "Id": "06d5dd86afe167e6eb08ea3044c0d6356df71a87f2b7acc2267f870459c4f2cd",
	        "Created": "2025-11-21T14:43:42.553015288Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2835562,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:43:42.615467666Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/06d5dd86afe167e6eb08ea3044c0d6356df71a87f2b7acc2267f870459c4f2cd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/06d5dd86afe167e6eb08ea3044c0d6356df71a87f2b7acc2267f870459c4f2cd/hostname",
	        "HostsPath": "/var/lib/docker/containers/06d5dd86afe167e6eb08ea3044c0d6356df71a87f2b7acc2267f870459c4f2cd/hosts",
	        "LogPath": "/var/lib/docker/containers/06d5dd86afe167e6eb08ea3044c0d6356df71a87f2b7acc2267f870459c4f2cd/06d5dd86afe167e6eb08ea3044c0d6356df71a87f2b7acc2267f870459c4f2cd-json.log",
	        "Name": "/old-k8s-version-092258",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-092258:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-092258",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "06d5dd86afe167e6eb08ea3044c0d6356df71a87f2b7acc2267f870459c4f2cd",
	                "LowerDir": "/var/lib/docker/overlay2/8452e0c048f2d0756f64c494882e5db8b7ecd5ac7b4b99aa190200898d89fa81-init/diff:/var/lib/docker/overlay2/789a4b9f9866e585907664b1eaf98d94438dbf699e0511f3ca5ba5ea682b005e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8452e0c048f2d0756f64c494882e5db8b7ecd5ac7b4b99aa190200898d89fa81/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8452e0c048f2d0756f64c494882e5db8b7ecd5ac7b4b99aa190200898d89fa81/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8452e0c048f2d0756f64c494882e5db8b7ecd5ac7b4b99aa190200898d89fa81/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-092258",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-092258/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-092258",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-092258",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-092258",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "858034611e5cf97ef820625d4dcf77e9b3d1510529f8fc62d29cb6c8391e9b31",
	            "SandboxKey": "/var/run/docker/netns/858034611e5c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36720"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36721"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36724"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36722"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36723"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-092258": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:d3:7f:90:c5:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02cac79c841e49103d05ede51175e2b52d6dd809de4e55337963bb73586b9563",
	                    "EndpointID": "d33cdf4d1c16a9f5945ab24f1a11153f9cd4665673d1a383064fc3d56825842f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-092258",
	                        "06d5dd86afe1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
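Two fields in the inspect output above are worth pulling out for this failure class: "Ulimits": [] means no per-container ulimit overrides were set, so the container inherits the Docker daemon's defaults, and NetworkSettings.Ports maps each published container port to the ephemeral host port chosen at creation (the empty HostPort entries under HostConfig.PortBindings requested that dynamic assignment). Illustrative one-liners against the same container:

# Show the (empty) per-container ulimit overrides.
docker inspect -f '{{.HostConfig.Ulimits}}' old-k8s-version-092258

# Resolve the host port backing 8443/tcp, the published API server port.
docker port old-k8s-version-092258 8443/tcp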
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-092258 -n old-k8s-version-092258
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-092258 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-092258 logs -n 25: (1.17643389s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-650772 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo docker system info                                                                                                                                                                                                            │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo containerd config dump                                                                                                                                                                                                        │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo crio config                                                                                                                                                                                                                   │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ delete  │ -p cilium-650772                                                                                                                                                                                                                                    │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ start   │ -p cert-expiration-184410 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-184410   │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ force-systemd-env-041746 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-041746 │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ delete  │ -p force-systemd-env-041746                                                                                                                                                                                                                         │ force-systemd-env-041746 │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:43 UTC │
	│ start   │ -p cert-options-035007 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ ssh     │ cert-options-035007 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ ssh     │ -p cert-options-035007 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ delete  │ -p cert-options-035007                                                                                                                                                                                                                              │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ start   │ -p old-k8s-version-092258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:44 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:43:36
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
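	
	Decoding one entry under that format, taking the first line below as the worked example: the leading I is the severity (Info; W, E and F would be warning, error and fatal), 1121 is the month and day (November 21), 14:43:36.383330 is the wall-clock time with microseconds, 2835167 is the thread id (here the minikube process), and out.go:360 is the source file and line that emitted the message.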
	I1121 14:43:36.383330 2835167 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:43:36.383513 2835167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:43:36.383524 2835167 out.go:374] Setting ErrFile to fd 2...
	I1121 14:43:36.383530 2835167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:43:36.383828 2835167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:43:36.384345 2835167 out.go:368] Setting JSON to false
	I1121 14:43:36.385436 2835167 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":69965,"bootTime":1763666252,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 14:43:36.385513 2835167 start.go:143] virtualization:  
	I1121 14:43:36.390376 2835167 out.go:179] * [old-k8s-version-092258] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:43:36.394191 2835167 notify.go:221] Checking for updates...
	I1121 14:43:36.397425 2835167 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:43:36.400714 2835167 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:43:36.403761 2835167 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:43:36.406876 2835167 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	I1121 14:43:36.409847 2835167 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:43:36.413120 2835167 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:43:36.416775 2835167 config.go:182] Loaded profile config "cert-expiration-184410": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:43:36.416939 2835167 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:43:36.450855 2835167 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:43:36.450991 2835167 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:43:36.516624 2835167 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:43:36.506387596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:43:36.516732 2835167 docker.go:319] overlay module found
	I1121 14:43:36.519985 2835167 out.go:179] * Using the docker driver based on user configuration
	I1121 14:43:36.522932 2835167 start.go:309] selected driver: docker
	I1121 14:43:36.522956 2835167 start.go:930] validating driver "docker" against <nil>
	I1121 14:43:36.522972 2835167 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:43:36.523794 2835167 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:43:36.580334 2835167 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:43:36.571381735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
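	
	The two docker info snapshots above are the driver preflight: minikube reads the full JSON and checks fields such as Driver (overlay2), CgroupDriver (cgroupfs), MemTotal and SecurityOptions before committing to the docker driver. Narrower queries over the same data, for comparison:
	
	# Pull individual fields instead of the full JSON blob (field names as in the dump above).
	docker info --format '{{.Driver}} {{.CgroupDriver}} {{.MemTotal}}'
	docker info --format '{{json .SecurityOptions}}'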
	I1121 14:43:36.580506 2835167 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:43:36.580737 2835167 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:43:36.583854 2835167 out.go:179] * Using Docker driver with root privileges
	I1121 14:43:36.586764 2835167 cni.go:84] Creating CNI manager for ""
	I1121 14:43:36.586838 2835167 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:43:36.586852 2835167 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 14:43:36.586941 2835167 start.go:353] cluster config:
	{Name:old-k8s-version-092258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-092258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
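	
	The struct above is the profile config persisted to config.json: NetworkPlugin:cni with an empty CNI: field records the "recommending kindnet" decision from the lines before it. The default can be overridden when the profile is created; a hedged example (--cni accepts auto, bridge, calico, cilium, flannel, kindnet, or a path to a CNI manifest):
	
	# Pin the CNI explicitly instead of letting minikube pick kindnet.
	minikube start -p old-k8s-version-092258 --driver=docker --container-runtime=containerd --cni=calico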
	I1121 14:43:36.590049 2835167 out.go:179] * Starting "old-k8s-version-092258" primary control-plane node in "old-k8s-version-092258" cluster
	I1121 14:43:36.592891 2835167 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 14:43:36.595918 2835167 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:43:36.598784 2835167 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 14:43:36.598825 2835167 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:43:36.598850 2835167 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1121 14:43:36.598866 2835167 cache.go:65] Caching tarball of preloaded images
	I1121 14:43:36.598958 2835167 preload.go:238] Found /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1121 14:43:36.598968 2835167 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1121 14:43:36.599136 2835167 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/config.json ...
	I1121 14:43:36.599165 2835167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/config.json: {Name:mk03fe35747f6c73b79e2daee9ca9c7b13210439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:43:36.618070 2835167 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:43:36.618098 2835167 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:43:36.618116 2835167 cache.go:243] Successfully downloaded all kic artifacts
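	
	The cache checks above are two-level: the preloaded image tarball for v1.28.0/containerd is found under .minikube/cache, and the kic base image is already present in the local docker daemon, so nothing is downloaded. One way to confirm what the daemon holds, for reference:
	
	# List the cached kic base image by digest (repository name taken from the log above).
	docker images --digests gcr.io/k8s-minikube/kicbase-builds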
	I1121 14:43:36.618138 2835167 start.go:360] acquireMachinesLock for old-k8s-version-092258: {Name:mkf21290144e8164ceda2548005b3a6e3ed2df4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:43:36.618251 2835167 start.go:364] duration metric: took 91.969µs to acquireMachinesLock for "old-k8s-version-092258"
	I1121 14:43:36.618280 2835167 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-092258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-092258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:43:36.618362 2835167 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:43:36.621674 2835167 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:43:36.621900 2835167 start.go:159] libmachine.API.Create for "old-k8s-version-092258" (driver="docker")
	I1121 14:43:36.621945 2835167 client.go:173] LocalClient.Create starting
	I1121 14:43:36.622015 2835167 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem
	I1121 14:43:36.622055 2835167 main.go:143] libmachine: Decoding PEM data...
	I1121 14:43:36.622071 2835167 main.go:143] libmachine: Parsing certificate...
	I1121 14:43:36.622122 2835167 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem
	I1121 14:43:36.622144 2835167 main.go:143] libmachine: Decoding PEM data...
	I1121 14:43:36.622155 2835167 main.go:143] libmachine: Parsing certificate...
	I1121 14:43:36.622504 2835167 cli_runner.go:164] Run: docker network inspect old-k8s-version-092258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:43:36.638581 2835167 cli_runner.go:211] docker network inspect old-k8s-version-092258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:43:36.638673 2835167 network_create.go:284] running [docker network inspect old-k8s-version-092258] to gather additional debugging logs...
	I1121 14:43:36.638695 2835167 cli_runner.go:164] Run: docker network inspect old-k8s-version-092258
	W1121 14:43:36.654171 2835167 cli_runner.go:211] docker network inspect old-k8s-version-092258 returned with exit code 1
	I1121 14:43:36.654200 2835167 network_create.go:287] error running [docker network inspect old-k8s-version-092258]: docker network inspect old-k8s-version-092258: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-092258 not found
	I1121 14:43:36.654221 2835167 network_create.go:289] output of [docker network inspect old-k8s-version-092258]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-092258 not found
	
	** /stderr **
	I1121 14:43:36.654336 2835167 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:43:36.670217 2835167 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c13a3bee40ff IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:9f:8e:c6:2a:d6} reservation:<nil>}
	I1121 14:43:36.670512 2835167 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1859e8fd5584 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:c6:00:f6:5b:96} reservation:<nil>}
	I1121 14:43:36.670770 2835167 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-44a9b6062c4d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:b5:31:a5:3d:f0} reservation:<nil>}
	I1121 14:43:36.671175 2835167 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1a410}
	I1121 14:43:36.671200 2835167 network_create.go:124] attempt to create docker network old-k8s-version-092258 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1121 14:43:36.671260 2835167 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-092258 old-k8s-version-092258
	I1121 14:43:36.731269 2835167 network_create.go:108] docker network old-k8s-version-092258 192.168.76.0/24 created
	I1121 14:43:36.731302 2835167 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-092258" container
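	
	Subnet selection above walks the candidate private ranges in order, skips 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 as taken, and settles on 192.168.76.0/24, from which the node's static IP 192.168.76.2 is derived. Verifying the resulting network on the same host:
	
	# Print the subnet and gateway of the network minikube just created.
	docker network inspect old-k8s-version-092258 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'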
	I1121 14:43:36.731379 2835167 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:43:36.747231 2835167 cli_runner.go:164] Run: docker volume create old-k8s-version-092258 --label name.minikube.sigs.k8s.io=old-k8s-version-092258 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:43:36.766444 2835167 oci.go:103] Successfully created a docker volume old-k8s-version-092258
	I1121 14:43:36.766529 2835167 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-092258-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-092258 --entrypoint /usr/bin/test -v old-k8s-version-092258:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:43:37.323084 2835167 oci.go:107] Successfully prepared a docker volume old-k8s-version-092258
	I1121 14:43:37.323160 2835167 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 14:43:37.323176 2835167 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:43:37.323249 2835167 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-092258:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 14:43:42.479471 2835167 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-092258:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.156180728s)
	I1121 14:43:42.479504 2835167 kic.go:203] duration metric: took 5.156324945s to extract preloaded images to volume ...
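Note: the two docker runs above first probe the named volume with /usr/bin/test and then untar the preloaded-images tarball into it. To spot-check that the extraction landed where the node container's /var mount expects (a sketch; assumes the tarball lays out lib/containerd under the volume root and that a busybox image is available):

	docker run --rm -v old-k8s-version-092258:/var busybox ls /var/lib/containerd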
	W1121 14:43:42.479641 2835167 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 14:43:42.479761 2835167 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:43:42.537202 2835167 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-092258 --name old-k8s-version-092258 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-092258 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-092258 --network old-k8s-version-092258 --ip 192.168.76.2 --volume old-k8s-version-092258:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:43:42.837167 2835167 cli_runner.go:164] Run: docker container inspect old-k8s-version-092258 --format={{.State.Running}}
	I1121 14:43:42.855671 2835167 cli_runner.go:164] Run: docker container inspect old-k8s-version-092258 --format={{.State.Status}}
	I1121 14:43:42.878990 2835167 cli_runner.go:164] Run: docker exec old-k8s-version-092258 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:43:42.930669 2835167 oci.go:144] the created container "old-k8s-version-092258" has a running status.
	I1121 14:43:42.930708 2835167 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa...
	I1121 14:43:43.641994 2835167 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:43:43.662600 2835167 cli_runner.go:164] Run: docker container inspect old-k8s-version-092258 --format={{.State.Status}}
	I1121 14:43:43.680831 2835167 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:43:43.680859 2835167 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-092258 chown docker:docker /home/docker/.ssh/authorized_keys]
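Note: port 22 of the node container was published to an ephemeral host port (36720 in this run, per the libmachine dialer below), and the key pair written above is what grants access. To find the port and open a shell manually (a sketch):

	docker port old-k8s-version-092258 22
	ssh -i /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa \
	    -p 36720 docker@127.0.0.1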
	I1121 14:43:43.723054 2835167 cli_runner.go:164] Run: docker container inspect old-k8s-version-092258 --format={{.State.Status}}
	I1121 14:43:43.743125 2835167 machine.go:94] provisionDockerMachine start ...
	I1121 14:43:43.743225 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:43:43.760910 2835167 main.go:143] libmachine: Using SSH client type: native
	I1121 14:43:43.761356 2835167 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36720 <nil> <nil>}
	I1121 14:43:43.761377 2835167 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:43:43.762044 2835167 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1121 14:43:46.904947 2835167 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-092258
	
	I1121 14:43:46.904969 2835167 ubuntu.go:182] provisioning hostname "old-k8s-version-092258"
	I1121 14:43:46.905064 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:43:46.922185 2835167 main.go:143] libmachine: Using SSH client type: native
	I1121 14:43:46.922512 2835167 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36720 <nil> <nil>}
	I1121 14:43:46.922533 2835167 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-092258 && echo "old-k8s-version-092258" | sudo tee /etc/hostname
	I1121 14:43:47.078378 2835167 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-092258
	
	I1121 14:43:47.078461 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:43:47.102253 2835167 main.go:143] libmachine: Using SSH client type: native
	I1121 14:43:47.102562 2835167 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36720 <nil> <nil>}
	I1121 14:43:47.102584 2835167 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-092258' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-092258/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-092258' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:43:47.245266 2835167 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:43:47.245351 2835167 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-2633933/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-2633933/.minikube}
	I1121 14:43:47.245378 2835167 ubuntu.go:190] setting up certificates
	I1121 14:43:47.245387 2835167 provision.go:84] configureAuth start
	I1121 14:43:47.245451 2835167 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-092258
	I1121 14:43:47.263540 2835167 provision.go:143] copyHostCerts
	I1121 14:43:47.263612 2835167 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem, removing ...
	I1121 14:43:47.263626 2835167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem
	I1121 14:43:47.263706 2835167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem (1082 bytes)
	I1121 14:43:47.263811 2835167 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem, removing ...
	I1121 14:43:47.263822 2835167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem
	I1121 14:43:47.263853 2835167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem (1123 bytes)
	I1121 14:43:47.263922 2835167 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem, removing ...
	I1121 14:43:47.263932 2835167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem
	I1121 14:43:47.263960 2835167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem (1679 bytes)
	I1121 14:43:47.264022 2835167 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-092258 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-092258]
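Note: the server cert is generated with the SAN list shown above; if TLS verification against the machine fails later, the SANs are the first thing to check (a sketch using openssl on the host):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'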
	I1121 14:43:48.319202 2835167 provision.go:177] copyRemoteCerts
	I1121 14:43:48.319298 2835167 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:43:48.319410 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:43:48.336010 2835167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36720 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa Username:docker}
	I1121 14:43:48.436628 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1121 14:43:48.454108 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1121 14:43:48.472424 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:43:48.490030 2835167 provision.go:87] duration metric: took 1.244618966s to configureAuth
	I1121 14:43:48.490068 2835167 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:43:48.490248 2835167 config.go:182] Loaded profile config "old-k8s-version-092258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:43:48.490256 2835167 machine.go:97] duration metric: took 4.747108297s to provisionDockerMachine
	I1121 14:43:48.490263 2835167 client.go:176] duration metric: took 11.868306871s to LocalClient.Create
	I1121 14:43:48.490277 2835167 start.go:167] duration metric: took 11.868378746s to libmachine.API.Create "old-k8s-version-092258"
	I1121 14:43:48.490284 2835167 start.go:293] postStartSetup for "old-k8s-version-092258" (driver="docker")
	I1121 14:43:48.490298 2835167 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:43:48.490349 2835167 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:43:48.490386 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:43:48.506758 2835167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36720 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa Username:docker}
	I1121 14:43:48.608899 2835167 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:43:48.612079 2835167 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:43:48.612112 2835167 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:43:48.612141 2835167 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/addons for local assets ...
	I1121 14:43:48.612212 2835167 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/files for local assets ...
	I1121 14:43:48.612293 2835167 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem -> 26357852.pem in /etc/ssl/certs
	I1121 14:43:48.612406 2835167 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:43:48.619568 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:43:48.636807 2835167 start.go:296] duration metric: took 146.508249ms for postStartSetup
	I1121 14:43:48.637286 2835167 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-092258
	I1121 14:43:48.653983 2835167 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/config.json ...
	I1121 14:43:48.654267 2835167 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:43:48.654326 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:43:48.671301 2835167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36720 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa Username:docker}
	I1121 14:43:48.770419 2835167 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:43:48.775323 2835167 start.go:128] duration metric: took 12.156945443s to createHost
	I1121 14:43:48.775348 2835167 start.go:83] releasing machines lock for "old-k8s-version-092258", held for 12.157085763s
	I1121 14:43:48.775420 2835167 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-092258
	I1121 14:43:48.792336 2835167 ssh_runner.go:195] Run: cat /version.json
	I1121 14:43:48.792390 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:43:48.792660 2835167 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:43:48.792747 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:43:48.822571 2835167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36720 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa Username:docker}
	I1121 14:43:48.823165 2835167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36720 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa Username:docker}
	I1121 14:43:49.010028 2835167 ssh_runner.go:195] Run: systemctl --version
	I1121 14:43:49.017657 2835167 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:43:49.023491 2835167 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:43:49.023609 2835167 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:43:49.052303 2835167 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
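Note: the find invocation above is logged with its shell quoting stripped and is not runnable as printed. An equivalent, properly quoted form of the same rename-to-.mk_disabled pattern (a sketch, not minikube's literal source):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;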
	I1121 14:43:49.052375 2835167 start.go:496] detecting cgroup driver to use...
	I1121 14:43:49.052424 2835167 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:43:49.052491 2835167 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:43:49.069013 2835167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:43:49.083455 2835167 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:43:49.083552 2835167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:43:49.106526 2835167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:43:49.129811 2835167 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:43:49.256720 2835167 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:43:49.389600 2835167 docker.go:234] disabling docker service ...
	I1121 14:43:49.389683 2835167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:43:49.410151 2835167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:43:49.423211 2835167 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:43:49.545870 2835167 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:43:49.674290 2835167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:43:49.688105 2835167 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:43:49.704461 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1121 14:43:49.715375 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:43:49.725179 2835167 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1121 14:43:49.725253 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1121 14:43:49.734991 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:43:49.745235 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:43:49.754840 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:43:49.764086 2835167 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:43:49.773152 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:43:49.781841 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:43:49.791026 2835167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:43:49.800527 2835167 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:43:49.808398 2835167 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:43:49.815956 2835167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:43:49.931886 2835167 ssh_runner.go:195] Run: sudo systemctl restart containerd
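Note: the sed edits above pin containerd to the cgroupfs driver (SystemdCgroup = false) to match the "cgroupfs" driver detected on the host. After the restart, the effective setting can be spot-checked from the host (a sketch):

	docker exec old-k8s-version-092258 grep -n 'SystemdCgroup' /etc/containerd/config.toml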
	I1121 14:43:50.066706 2835167 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:43:50.066831 2835167 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:43:50.071006 2835167 start.go:564] Will wait 60s for crictl version
	I1121 14:43:50.071125 2835167 ssh_runner.go:195] Run: which crictl
	I1121 14:43:50.075393 2835167 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:43:50.118651 2835167 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:43:50.118773 2835167 ssh_runner.go:195] Run: containerd --version
	I1121 14:43:50.141212 2835167 ssh_runner.go:195] Run: containerd --version
	I1121 14:43:50.169638 2835167 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1121 14:43:50.172499 2835167 cli_runner.go:164] Run: docker network inspect old-k8s-version-092258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:43:50.189507 2835167 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1121 14:43:50.198370 2835167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
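Note: the one-liner above is an idempotent hosts-file update: strip any previous host.minikube.internal line, append the fresh mapping, and copy the result back over /etc/hosts. Verifying the entry from the host side (a sketch):

	docker exec old-k8s-version-092258 grep 'host.minikube.internal' /etc/hosts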
	I1121 14:43:50.208962 2835167 kubeadm.go:884] updating cluster {Name:old-k8s-version-092258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-092258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:43:50.209203 2835167 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 14:43:50.209272 2835167 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:43:50.234384 2835167 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:43:50.234409 2835167 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:43:50.234475 2835167 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:43:50.259395 2835167 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:43:50.259421 2835167 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:43:50.259430 2835167 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 containerd true true} ...
	I1121 14:43:50.259536 2835167 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-092258 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-092258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:43:50.259609 2835167 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:43:50.287070 2835167 cni.go:84] Creating CNI manager for ""
	I1121 14:43:50.287095 2835167 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:43:50.287115 2835167 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:43:50.287139 2835167 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-092258 NodeName:old-k8s-version-092258 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:43:50.287271 2835167 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-092258"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
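Note: the manifest above is written to /var/tmp/minikube/kubeadm.yaml.new before being promoted. kubeadm v1.28 can sanity-check such a multi-document config file without touching the cluster (a sketch; run inside the node container):

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new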
	
	I1121 14:43:50.287342 2835167 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1121 14:43:50.295386 2835167 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:43:50.295454 2835167 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:43:50.303213 2835167 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1121 14:43:50.317127 2835167 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:43:50.331240 2835167 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1121 14:43:50.344296 2835167 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:43:50.347919 2835167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:43:50.357793 2835167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:43:50.483017 2835167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:43:50.499630 2835167 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258 for IP: 192.168.76.2
	I1121 14:43:50.499697 2835167 certs.go:195] generating shared ca certs ...
	I1121 14:43:50.499729 2835167 certs.go:227] acquiring lock for ca certs: {Name:mk0a1b8efa9f1d453751b4f7afafeea16d7243a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:43:50.499912 2835167 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.key
	I1121 14:43:50.499982 2835167 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.key
	I1121 14:43:50.500020 2835167 certs.go:257] generating profile certs ...
	I1121 14:43:50.500125 2835167 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.key
	I1121 14:43:50.500157 2835167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt with IP's: []
	I1121 14:43:50.881389 2835167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt ...
	I1121 14:43:50.881423 2835167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: {Name:mkd66b37bd8f68df88ee391b1c0ae406d24100dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:43:50.881622 2835167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.key ...
	I1121 14:43:50.881638 2835167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.key: {Name:mk87497e50632ba54cdc705e25ae82f0b49d923a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:43:50.881733 2835167 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.key.fe0fc8ce
	I1121 14:43:50.881751 2835167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.crt.fe0fc8ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1121 14:43:51.368107 2835167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.crt.fe0fc8ce ...
	I1121 14:43:51.368141 2835167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.crt.fe0fc8ce: {Name:mke3412122bd471676c09fe30765bbb879486748 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:43:51.368348 2835167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.key.fe0fc8ce ...
	I1121 14:43:51.368363 2835167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.key.fe0fc8ce: {Name:mkb54d05c24cffdedc4d0fc59e5780f32a7a4815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:43:51.368463 2835167 certs.go:382] copying /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.crt.fe0fc8ce -> /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.crt
	I1121 14:43:51.368560 2835167 certs.go:386] copying /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.key.fe0fc8ce -> /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.key
	I1121 14:43:51.368628 2835167 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/proxy-client.key
	I1121 14:43:51.368647 2835167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/proxy-client.crt with IP's: []
	I1121 14:43:51.447326 2835167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/proxy-client.crt ...
	I1121 14:43:51.447360 2835167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/proxy-client.crt: {Name:mk8fd112c818af834b5d68c83f8c92f6291ef45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:43:51.447577 2835167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/proxy-client.key ...
	I1121 14:43:51.447598 2835167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/proxy-client.key: {Name:mk461afd810cf943501ef59a65730b33eecea0e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:43:51.447801 2835167 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785.pem (1338 bytes)
	W1121 14:43:51.447843 2835167 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785_empty.pem, impossibly tiny 0 bytes
	I1121 14:43:51.447858 2835167 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:43:51.447889 2835167 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem (1082 bytes)
	I1121 14:43:51.447920 2835167 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:43:51.447947 2835167 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem (1679 bytes)
	I1121 14:43:51.447993 2835167 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:43:51.448566 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:43:51.466854 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:43:51.484620 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:43:51.502526 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:43:51.521430 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1121 14:43:51.540179 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 14:43:51.560949 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:43:51.579376 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:43:51.598197 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:43:51.615934 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785.pem --> /usr/share/ca-certificates/2635785.pem (1338 bytes)
	I1121 14:43:51.634213 2835167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /usr/share/ca-certificates/26357852.pem (1708 bytes)
	I1121 14:43:51.652095 2835167 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:43:51.664901 2835167 ssh_runner.go:195] Run: openssl version
	I1121 14:43:51.671463 2835167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:43:51.680109 2835167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:43:51.684127 2835167 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:43:51.684210 2835167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:43:51.727682 2835167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:43:51.736178 2835167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2635785.pem && ln -fs /usr/share/ca-certificates/2635785.pem /etc/ssl/certs/2635785.pem"
	I1121 14:43:51.744471 2835167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2635785.pem
	I1121 14:43:51.748815 2835167 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/2635785.pem
	I1121 14:43:51.748886 2835167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2635785.pem
	I1121 14:43:51.790035 2835167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2635785.pem /etc/ssl/certs/51391683.0"
	I1121 14:43:51.798184 2835167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26357852.pem && ln -fs /usr/share/ca-certificates/26357852.pem /etc/ssl/certs/26357852.pem"
	I1121 14:43:51.806524 2835167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26357852.pem
	I1121 14:43:51.810447 2835167 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/26357852.pem
	I1121 14:43:51.810539 2835167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26357852.pem
	I1121 14:43:51.851288 2835167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/26357852.pem /etc/ssl/certs/3ec20f2e.0"
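Note: the hash/symlink sequence above follows OpenSSL's c_rehash convention: openssl x509 -hash prints the subject-name hash (b5213941, 51391683, 3ec20f2e in this run), and TLS libraries resolve trust by looking up <hash>.0 under /etc/ssl/certs. The same linking done by hand (a sketch):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"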
	I1121 14:43:51.859608 2835167 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:43:51.863193 2835167 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:43:51.863292 2835167 kubeadm.go:401] StartCluster: {Name:old-k8s-version-092258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-092258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:43:51.863366 2835167 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:43:51.863446 2835167 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:43:51.891859 2835167 cri.go:89] found id: ""
	I1121 14:43:51.891941 2835167 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:43:51.900527 2835167 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:43:51.909151 2835167 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:43:51.909241 2835167 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:43:51.919737 2835167 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:43:51.919760 2835167 kubeadm.go:158] found existing configuration files:
	
	I1121 14:43:51.919826 2835167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:43:51.928877 2835167 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:43:51.928996 2835167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:43:51.936697 2835167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:43:51.944973 2835167 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:43:51.945066 2835167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:43:51.952856 2835167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:43:51.960547 2835167 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:43:51.960686 2835167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:43:51.968472 2835167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:43:51.976805 2835167 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:43:51.976888 2835167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:43:51.984345 2835167 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:43:52.080198 2835167 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 14:43:52.183959 2835167 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:44:07.478855 2835167 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1121 14:44:07.478915 2835167 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:44:07.479009 2835167 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:44:07.479066 2835167 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1121 14:44:07.479102 2835167 kubeadm.go:319] OS: Linux
	I1121 14:44:07.479150 2835167 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:44:07.479201 2835167 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1121 14:44:07.479250 2835167 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:44:07.479300 2835167 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:44:07.479351 2835167 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:44:07.479413 2835167 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:44:07.479461 2835167 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:44:07.479511 2835167 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:44:07.479559 2835167 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1121 14:44:07.479634 2835167 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:44:07.479732 2835167 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:44:07.479828 2835167 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1121 14:44:07.479893 2835167 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:44:07.482963 2835167 out.go:252]   - Generating certificates and keys ...
	I1121 14:44:07.483064 2835167 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:44:07.483132 2835167 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:44:07.483202 2835167 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:44:07.483261 2835167 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:44:07.483324 2835167 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:44:07.483377 2835167 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:44:07.483433 2835167 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:44:07.483564 2835167 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-092258] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 14:44:07.483619 2835167 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:44:07.483749 2835167 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-092258] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 14:44:07.483818 2835167 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:44:07.483886 2835167 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:44:07.483933 2835167 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:44:07.483992 2835167 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:44:07.484045 2835167 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:44:07.484101 2835167 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:44:07.484169 2835167 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:44:07.484226 2835167 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:44:07.484312 2835167 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:44:07.484381 2835167 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:44:07.487428 2835167 out.go:252]   - Booting up control plane ...
	I1121 14:44:07.487608 2835167 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:44:07.487710 2835167 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:44:07.487786 2835167 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:44:07.487905 2835167 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:44:07.488000 2835167 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:44:07.488044 2835167 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:44:07.488215 2835167 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1121 14:44:07.488300 2835167 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.502585 seconds
	I1121 14:44:07.488418 2835167 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:44:07.488571 2835167 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:44:07.488637 2835167 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:44:07.488849 2835167 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-092258 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:44:07.488911 2835167 kubeadm.go:319] [bootstrap-token] Using token: szaotk.n52uxpmszzhbby9z
	I1121 14:44:07.491820 2835167 out.go:252]   - Configuring RBAC rules ...
	I1121 14:44:07.491949 2835167 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:44:07.492037 2835167 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:44:07.492184 2835167 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:44:07.492318 2835167 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:44:07.492450 2835167 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:44:07.492566 2835167 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:44:07.492691 2835167 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:44:07.492737 2835167 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:44:07.492785 2835167 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:44:07.492789 2835167 kubeadm.go:319] 
	I1121 14:44:07.492852 2835167 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:44:07.492857 2835167 kubeadm.go:319] 
	I1121 14:44:07.492938 2835167 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:44:07.492942 2835167 kubeadm.go:319] 
	I1121 14:44:07.492968 2835167 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:44:07.493047 2835167 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:44:07.493101 2835167 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:44:07.493105 2835167 kubeadm.go:319] 
	I1121 14:44:07.493162 2835167 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:44:07.493166 2835167 kubeadm.go:319] 
	I1121 14:44:07.493217 2835167 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:44:07.493221 2835167 kubeadm.go:319] 
	I1121 14:44:07.493276 2835167 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:44:07.493355 2835167 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:44:07.493428 2835167 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:44:07.493432 2835167 kubeadm.go:319] 
	I1121 14:44:07.493521 2835167 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:44:07.493601 2835167 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:44:07.493607 2835167 kubeadm.go:319] 
	I1121 14:44:07.493695 2835167 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token szaotk.n52uxpmszzhbby9z \
	I1121 14:44:07.493804 2835167 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d756a1c258e082bbc06f965046f24233900a8e069c2a9d29a764f0b68af739ae \
	I1121 14:44:07.493826 2835167 kubeadm.go:319] 	--control-plane 
	I1121 14:44:07.493830 2835167 kubeadm.go:319] 
	I1121 14:44:07.493920 2835167 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:44:07.493924 2835167 kubeadm.go:319] 
	I1121 14:44:07.494010 2835167 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token szaotk.n52uxpmszzhbby9z \
	I1121 14:44:07.494129 2835167 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d756a1c258e082bbc06f965046f24233900a8e069c2a9d29a764f0b68af739ae 
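
	The --discovery-token-ca-cert-hash printed above pins the cluster CA for joining nodes. As a sketch (the standard upstream kubeadm recipe, not a command this test run executed), the hash can be recomputed on the control-plane node and compared against the value in the join command:

	    # recompute the CA public-key hash that kubeadm printed
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
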
	I1121 14:44:07.494138 2835167 cni.go:84] Creating CNI manager for ""
	I1121 14:44:07.494145 2835167 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:44:07.497166 2835167 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:44:07.500216 2835167 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:44:07.505987 2835167 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1121 14:44:07.506006 2835167 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:44:07.546131 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:44:08.532445 2835167 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:44:08.532548 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:08.532605 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-092258 minikube.k8s.io/updated_at=2025_11_21T14_44_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=old-k8s-version-092258 minikube.k8s.io/primary=true
	I1121 14:44:08.674460 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:08.674568 2835167 ops.go:34] apiserver oom_adj: -16
	I1121 14:44:09.175394 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:09.675368 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:10.174563 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:10.675545 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:11.174568 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:11.675237 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:12.175098 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:12.675136 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:13.175409 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:13.674508 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:14.175483 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:14.674955 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:15.174638 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:15.674566 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:16.174919 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:16.674946 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:17.174624 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:17.675110 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:18.174609 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:18.674810 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:19.174819 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:19.675503 2835167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:44:19.793386 2835167 kubeadm.go:1114] duration metric: took 11.260904779s to wait for elevateKubeSystemPrivileges
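
	The repeated "kubectl get sa default" runs above are minikube polling until the default ServiceAccount exists, which appears to be its readiness gate before granting kube-system privileges. A minimal standalone equivalent, assuming the same binary and kubeconfig paths (illustrative only):

	    # poll until the controller-manager has created the default ServiceAccount
	    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
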
	I1121 14:44:19.793428 2835167 kubeadm.go:403] duration metric: took 27.930140359s to StartCluster
	I1121 14:44:19.793447 2835167 settings.go:142] acquiring lock: {Name:mkd6064915932eca5a3b1d70feb4ec8240f340da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:44:19.793514 2835167 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:44:19.794554 2835167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/kubeconfig: {Name:mkd905aaf74d26e32c0b3e46a7edfbf13f4b98ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:44:19.794781 2835167 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:44:19.794907 2835167 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:44:19.795160 2835167 config.go:182] Loaded profile config "old-k8s-version-092258": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1121 14:44:19.795206 2835167 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:44:19.795273 2835167 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-092258"
	I1121 14:44:19.795287 2835167 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-092258"
	I1121 14:44:19.795308 2835167 host.go:66] Checking if "old-k8s-version-092258" exists ...
	I1121 14:44:19.795815 2835167 cli_runner.go:164] Run: docker container inspect old-k8s-version-092258 --format={{.State.Status}}
	I1121 14:44:19.795979 2835167 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-092258"
	I1121 14:44:19.795995 2835167 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-092258"
	I1121 14:44:19.796255 2835167 cli_runner.go:164] Run: docker container inspect old-k8s-version-092258 --format={{.State.Status}}
	I1121 14:44:19.798851 2835167 out.go:179] * Verifying Kubernetes components...
	I1121 14:44:19.806315 2835167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:44:19.841770 2835167 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-092258"
	I1121 14:44:19.841810 2835167 host.go:66] Checking if "old-k8s-version-092258" exists ...
	I1121 14:44:19.842215 2835167 cli_runner.go:164] Run: docker container inspect old-k8s-version-092258 --format={{.State.Status}}
	I1121 14:44:19.843021 2835167 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:44:19.845981 2835167 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:44:19.846003 2835167 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:44:19.846070 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:44:19.880288 2835167 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:44:19.880310 2835167 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:44:19.880371 2835167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-092258
	I1121 14:44:19.888389 2835167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36720 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa Username:docker}
	I1121 14:44:19.916676 2835167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36720 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/old-k8s-version-092258/id_rsa Username:docker}
	I1121 14:44:20.244082 2835167 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:44:20.285170 2835167 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:44:20.285362 2835167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:44:20.332070 2835167 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:44:21.059715 2835167 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-092258" to be "Ready" ...
	I1121 14:44:21.059828 2835167 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
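
	The sed pipeline at 14:44:20.285170 splices a hosts block into the CoreDNS Corefile ahead of the forward plugin, so in-cluster lookups of host.minikube.internal resolve to the host gateway. Reconstructed from that command, the injected stanza is:

	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }
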
	I1121 14:44:21.566332 2835167 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-092258" context rescaled to 1 replicas
	I1121 14:44:21.605475 2835167 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.273369049s)
	I1121 14:44:21.608818 2835167 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1121 14:44:21.611888 2835167 addons.go:530] duration metric: took 1.816656544s for enable addons: enabled=[default-storageclass storage-provisioner]
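
	Both addons can also be toggled by hand on a running profile; a sketch using the minikube CLI with this run's profile name:

	    minikube -p old-k8s-version-092258 addons enable storage-provisioner
	    minikube -p old-k8s-version-092258 addons list   # shows enabled/disabled state
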
	W1121 14:44:23.063813 2835167 node_ready.go:57] node "old-k8s-version-092258" has "Ready":"False" status (will retry)
	W1121 14:44:25.563129 2835167 node_ready.go:57] node "old-k8s-version-092258" has "Ready":"False" status (will retry)
	W1121 14:44:27.564011 2835167 node_ready.go:57] node "old-k8s-version-092258" has "Ready":"False" status (will retry)
	W1121 14:44:30.063612 2835167 node_ready.go:57] node "old-k8s-version-092258" has "Ready":"False" status (will retry)
	W1121 14:44:32.562923 2835167 node_ready.go:57] node "old-k8s-version-092258" has "Ready":"False" status (will retry)
	I1121 14:44:33.563363 2835167 node_ready.go:49] node "old-k8s-version-092258" is "Ready"
	I1121 14:44:33.563395 2835167 node_ready.go:38] duration metric: took 12.503648731s for node "old-k8s-version-092258" to be "Ready" ...
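
	The node_ready wait above polls the node's Ready condition. One way to perform the equivalent check by hand (a sketch using standard kubectl JSONPath):

	    # prints "True" once the kubelet posts a Ready status
	    kubectl --context old-k8s-version-092258 get node old-k8s-version-092258 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
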
	I1121 14:44:33.563409 2835167 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:44:33.563474 2835167 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:44:33.580001 2835167 api_server.go:72] duration metric: took 13.7851816s to wait for apiserver process to appear ...
	I1121 14:44:33.580026 2835167 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:44:33.580045 2835167 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:44:33.589120 2835167 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1121 14:44:33.590563 2835167 api_server.go:141] control plane version: v1.28.0
	I1121 14:44:33.590586 2835167 api_server.go:131] duration metric: took 10.553339ms to wait for apiserver health ...
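
	The same healthz endpoint probed above is reachable through kubectl without handling client certificates manually (a sketch):

	    kubectl --context old-k8s-version-092258 get --raw=/healthz   # expected output: ok
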
	I1121 14:44:33.590594 2835167 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:44:33.595235 2835167 system_pods.go:59] 8 kube-system pods found
	I1121 14:44:33.595322 2835167 system_pods.go:61] "coredns-5dd5756b68-86stv" [6a48c3f2-f439-40e1-885b-5850f95d1ffc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:44:33.595346 2835167 system_pods.go:61] "etcd-old-k8s-version-092258" [bbb172b1-cd74-44e9-ba24-92155ea08be4] Running
	I1121 14:44:33.595390 2835167 system_pods.go:61] "kindnet-tfn5q" [6bec8380-6059-40d0-b0ed-6c3906f84591] Running
	I1121 14:44:33.595417 2835167 system_pods.go:61] "kube-apiserver-old-k8s-version-092258" [adf091a2-7b6d-4ba0-a537-9c5f7f93c471] Running
	I1121 14:44:33.595442 2835167 system_pods.go:61] "kube-controller-manager-old-k8s-version-092258" [01f77916-588d-469c-b175-3bbcdfe34ce8] Running
	I1121 14:44:33.595479 2835167 system_pods.go:61] "kube-proxy-tdwt5" [94e025a3-f19d-40ce-b6a6-9e2eb3b8f998] Running
	I1121 14:44:33.595506 2835167 system_pods.go:61] "kube-scheduler-old-k8s-version-092258" [7dfed185-93bd-4218-9c17-a6105d34022f] Running
	I1121 14:44:33.595532 2835167 system_pods.go:61] "storage-provisioner" [a31c361f-8fb6-4726-a554-e70884e4d16e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:44:33.595572 2835167 system_pods.go:74] duration metric: took 4.969827ms to wait for pod list to return data ...
	I1121 14:44:33.595601 2835167 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:44:33.599253 2835167 default_sa.go:45] found service account: "default"
	I1121 14:44:33.599325 2835167 default_sa.go:55] duration metric: took 3.703418ms for default service account to be created ...
	I1121 14:44:33.599363 2835167 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:44:33.603344 2835167 system_pods.go:86] 8 kube-system pods found
	I1121 14:44:33.603423 2835167 system_pods.go:89] "coredns-5dd5756b68-86stv" [6a48c3f2-f439-40e1-885b-5850f95d1ffc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:44:33.603457 2835167 system_pods.go:89] "etcd-old-k8s-version-092258" [bbb172b1-cd74-44e9-ba24-92155ea08be4] Running
	I1121 14:44:33.603486 2835167 system_pods.go:89] "kindnet-tfn5q" [6bec8380-6059-40d0-b0ed-6c3906f84591] Running
	I1121 14:44:33.603513 2835167 system_pods.go:89] "kube-apiserver-old-k8s-version-092258" [adf091a2-7b6d-4ba0-a537-9c5f7f93c471] Running
	I1121 14:44:33.603549 2835167 system_pods.go:89] "kube-controller-manager-old-k8s-version-092258" [01f77916-588d-469c-b175-3bbcdfe34ce8] Running
	I1121 14:44:33.603576 2835167 system_pods.go:89] "kube-proxy-tdwt5" [94e025a3-f19d-40ce-b6a6-9e2eb3b8f998] Running
	I1121 14:44:33.603600 2835167 system_pods.go:89] "kube-scheduler-old-k8s-version-092258" [7dfed185-93bd-4218-9c17-a6105d34022f] Running
	I1121 14:44:33.603640 2835167 system_pods.go:89] "storage-provisioner" [a31c361f-8fb6-4726-a554-e70884e4d16e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:44:33.603680 2835167 retry.go:31] will retry after 248.130267ms: missing components: kube-dns
	I1121 14:44:33.863548 2835167 system_pods.go:86] 8 kube-system pods found
	I1121 14:44:33.863646 2835167 system_pods.go:89] "coredns-5dd5756b68-86stv" [6a48c3f2-f439-40e1-885b-5850f95d1ffc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:44:33.863677 2835167 system_pods.go:89] "etcd-old-k8s-version-092258" [bbb172b1-cd74-44e9-ba24-92155ea08be4] Running
	I1121 14:44:33.863699 2835167 system_pods.go:89] "kindnet-tfn5q" [6bec8380-6059-40d0-b0ed-6c3906f84591] Running
	I1121 14:44:33.863735 2835167 system_pods.go:89] "kube-apiserver-old-k8s-version-092258" [adf091a2-7b6d-4ba0-a537-9c5f7f93c471] Running
	I1121 14:44:33.863762 2835167 system_pods.go:89] "kube-controller-manager-old-k8s-version-092258" [01f77916-588d-469c-b175-3bbcdfe34ce8] Running
	I1121 14:44:33.863787 2835167 system_pods.go:89] "kube-proxy-tdwt5" [94e025a3-f19d-40ce-b6a6-9e2eb3b8f998] Running
	I1121 14:44:33.863827 2835167 system_pods.go:89] "kube-scheduler-old-k8s-version-092258" [7dfed185-93bd-4218-9c17-a6105d34022f] Running
	I1121 14:44:33.863857 2835167 system_pods.go:89] "storage-provisioner" [a31c361f-8fb6-4726-a554-e70884e4d16e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:44:33.863904 2835167 retry.go:31] will retry after 379.807267ms: missing components: kube-dns
	I1121 14:44:34.248297 2835167 system_pods.go:86] 8 kube-system pods found
	I1121 14:44:34.248331 2835167 system_pods.go:89] "coredns-5dd5756b68-86stv" [6a48c3f2-f439-40e1-885b-5850f95d1ffc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:44:34.248338 2835167 system_pods.go:89] "etcd-old-k8s-version-092258" [bbb172b1-cd74-44e9-ba24-92155ea08be4] Running
	I1121 14:44:34.248344 2835167 system_pods.go:89] "kindnet-tfn5q" [6bec8380-6059-40d0-b0ed-6c3906f84591] Running
	I1121 14:44:34.248348 2835167 system_pods.go:89] "kube-apiserver-old-k8s-version-092258" [adf091a2-7b6d-4ba0-a537-9c5f7f93c471] Running
	I1121 14:44:34.248352 2835167 system_pods.go:89] "kube-controller-manager-old-k8s-version-092258" [01f77916-588d-469c-b175-3bbcdfe34ce8] Running
	I1121 14:44:34.248356 2835167 system_pods.go:89] "kube-proxy-tdwt5" [94e025a3-f19d-40ce-b6a6-9e2eb3b8f998] Running
	I1121 14:44:34.248360 2835167 system_pods.go:89] "kube-scheduler-old-k8s-version-092258" [7dfed185-93bd-4218-9c17-a6105d34022f] Running
	I1121 14:44:34.248365 2835167 system_pods.go:89] "storage-provisioner" [a31c361f-8fb6-4726-a554-e70884e4d16e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:44:34.248380 2835167 retry.go:31] will retry after 418.10052ms: missing components: kube-dns
	I1121 14:44:34.670581 2835167 system_pods.go:86] 8 kube-system pods found
	I1121 14:44:34.670670 2835167 system_pods.go:89] "coredns-5dd5756b68-86stv" [6a48c3f2-f439-40e1-885b-5850f95d1ffc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:44:34.670687 2835167 system_pods.go:89] "etcd-old-k8s-version-092258" [bbb172b1-cd74-44e9-ba24-92155ea08be4] Running
	I1121 14:44:34.670694 2835167 system_pods.go:89] "kindnet-tfn5q" [6bec8380-6059-40d0-b0ed-6c3906f84591] Running
	I1121 14:44:34.670698 2835167 system_pods.go:89] "kube-apiserver-old-k8s-version-092258" [adf091a2-7b6d-4ba0-a537-9c5f7f93c471] Running
	I1121 14:44:34.670703 2835167 system_pods.go:89] "kube-controller-manager-old-k8s-version-092258" [01f77916-588d-469c-b175-3bbcdfe34ce8] Running
	I1121 14:44:34.670707 2835167 system_pods.go:89] "kube-proxy-tdwt5" [94e025a3-f19d-40ce-b6a6-9e2eb3b8f998] Running
	I1121 14:44:34.670711 2835167 system_pods.go:89] "kube-scheduler-old-k8s-version-092258" [7dfed185-93bd-4218-9c17-a6105d34022f] Running
	I1121 14:44:34.670736 2835167 system_pods.go:89] "storage-provisioner" [a31c361f-8fb6-4726-a554-e70884e4d16e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:44:34.670759 2835167 retry.go:31] will retry after 454.42102ms: missing components: kube-dns
	I1121 14:44:35.130522 2835167 system_pods.go:86] 8 kube-system pods found
	I1121 14:44:35.130555 2835167 system_pods.go:89] "coredns-5dd5756b68-86stv" [6a48c3f2-f439-40e1-885b-5850f95d1ffc] Running
	I1121 14:44:35.130563 2835167 system_pods.go:89] "etcd-old-k8s-version-092258" [bbb172b1-cd74-44e9-ba24-92155ea08be4] Running
	I1121 14:44:35.130568 2835167 system_pods.go:89] "kindnet-tfn5q" [6bec8380-6059-40d0-b0ed-6c3906f84591] Running
	I1121 14:44:35.130573 2835167 system_pods.go:89] "kube-apiserver-old-k8s-version-092258" [adf091a2-7b6d-4ba0-a537-9c5f7f93c471] Running
	I1121 14:44:35.130579 2835167 system_pods.go:89] "kube-controller-manager-old-k8s-version-092258" [01f77916-588d-469c-b175-3bbcdfe34ce8] Running
	I1121 14:44:35.130582 2835167 system_pods.go:89] "kube-proxy-tdwt5" [94e025a3-f19d-40ce-b6a6-9e2eb3b8f998] Running
	I1121 14:44:35.130586 2835167 system_pods.go:89] "kube-scheduler-old-k8s-version-092258" [7dfed185-93bd-4218-9c17-a6105d34022f] Running
	I1121 14:44:35.130590 2835167 system_pods.go:89] "storage-provisioner" [a31c361f-8fb6-4726-a554-e70884e4d16e] Running
	I1121 14:44:35.130598 2835167 system_pods.go:126] duration metric: took 1.531191935s to wait for k8s-apps to be running ...
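
	The retries above were waiting only on the kube-dns component, i.e. the coredns pod carrying the k8s-app=kube-dns label. To inspect the same selection by hand (a sketch):

	    kubectl --context old-k8s-version-092258 -n kube-system get pods -l k8s-app=kube-dns -o wide
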
	I1121 14:44:35.130606 2835167 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:44:35.130663 2835167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:44:35.145395 2835167 system_svc.go:56] duration metric: took 14.776546ms WaitForService to wait for kubelet
	I1121 14:44:35.145455 2835167 kubeadm.go:587] duration metric: took 15.350619907s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:44:35.145475 2835167 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:44:35.148334 2835167 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 14:44:35.148369 2835167 node_conditions.go:123] node cpu capacity is 2
	I1121 14:44:35.148382 2835167 node_conditions.go:105] duration metric: took 2.896581ms to run NodePressure ...
	I1121 14:44:35.148393 2835167 start.go:242] waiting for startup goroutines ...
	I1121 14:44:35.148401 2835167 start.go:247] waiting for cluster config update ...
	I1121 14:44:35.148412 2835167 start.go:256] writing updated cluster config ...
	I1121 14:44:35.148743 2835167 ssh_runner.go:195] Run: rm -f paused
	I1121 14:44:35.152681 2835167 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:44:35.157000 2835167 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-86stv" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:35.162556 2835167 pod_ready.go:94] pod "coredns-5dd5756b68-86stv" is "Ready"
	I1121 14:44:35.162601 2835167 pod_ready.go:86] duration metric: took 5.502719ms for pod "coredns-5dd5756b68-86stv" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:35.166472 2835167 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-092258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:35.171935 2835167 pod_ready.go:94] pod "etcd-old-k8s-version-092258" is "Ready"
	I1121 14:44:35.171965 2835167 pod_ready.go:86] duration metric: took 5.463835ms for pod "etcd-old-k8s-version-092258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:35.175582 2835167 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-092258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:35.181518 2835167 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-092258" is "Ready"
	I1121 14:44:35.181551 2835167 pod_ready.go:86] duration metric: took 5.941771ms for pod "kube-apiserver-old-k8s-version-092258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:35.184926 2835167 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-092258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:35.557460 2835167 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-092258" is "Ready"
	I1121 14:44:35.557489 2835167 pod_ready.go:86] duration metric: took 372.537001ms for pod "kube-controller-manager-old-k8s-version-092258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:35.757817 2835167 pod_ready.go:83] waiting for pod "kube-proxy-tdwt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:36.157592 2835167 pod_ready.go:94] pod "kube-proxy-tdwt5" is "Ready"
	I1121 14:44:36.157618 2835167 pod_ready.go:86] duration metric: took 399.771111ms for pod "kube-proxy-tdwt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:36.357529 2835167 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-092258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:36.757566 2835167 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-092258" is "Ready"
	I1121 14:44:36.757596 2835167 pod_ready.go:86] duration metric: took 400.036784ms for pod "kube-scheduler-old-k8s-version-092258" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:44:36.757610 2835167 pod_ready.go:40] duration metric: took 1.604896006s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
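
	This "extra waiting" phase checks one pod per control-plane label listed above. kubectl wait can express the same condition declaratively; a sketch for one of the labels:

	    kubectl --context old-k8s-version-092258 -n kube-system \
	      wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
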
	I1121 14:44:36.818445 2835167 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1121 14:44:36.821296 2835167 out.go:203] 
	W1121 14:44:36.824281 2835167 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1121 14:44:36.827383 2835167 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1121 14:44:36.830301 2835167 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-092258" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	ab7b2c1339a58       1611cd07b61d5       9 seconds ago       Running             busybox                   0                   befa3559e32d9       busybox                                          default
	4fa0544fe52cc       97e04611ad434       15 seconds ago      Running             coredns                   0                   b58b59f73a24b       coredns-5dd5756b68-86stv                         kube-system
	c6ace07879b84       ba04bb24b9575       15 seconds ago      Running             storage-provisioner       0                   3680e435bb193       storage-provisioner                              kube-system
	495595ef81ee7       b1a8c6f707935       26 seconds ago      Running             kindnet-cni               0                   f4ddede8f051f       kindnet-tfn5q                                    kube-system
	630ebb9fe56a1       940f54a5bcae9       28 seconds ago      Running             kube-proxy                0                   1812faa70a69a       kube-proxy-tdwt5                                 kube-system
	331a280f7d8fb       46cc66ccc7c19       48 seconds ago      Running             kube-controller-manager   0                   9d7554dad7608       kube-controller-manager-old-k8s-version-092258   kube-system
	46391c1bd1fc7       762dce4090c5f       48 seconds ago      Running             kube-scheduler            0                   88bf0a72d6a98       kube-scheduler-old-k8s-version-092258            kube-system
	32a76684e0ad4       9cdd6470f48c8       48 seconds ago      Running             etcd                      0                   edaf6d16372ae       etcd-old-k8s-version-092258                      kube-system
	2e1cd1261e99f       00543d2fe5d71       49 seconds ago      Running             kube-apiserver            0                   58f4b63de6fd5       kube-apiserver-old-k8s-version-092258            kube-system
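
	The table above is CRI state gathered on the node. With the containerd runtime it can be reproduced over SSH using crictl, assuming crictl is present in the node image as it is in kubeadm-based setups (a sketch):

	    minikube -p old-k8s-version-092258 ssh -- sudo crictl ps -a
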
	
	
	==> containerd <==
	Nov 21 14:44:33 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:33.681687355Z" level=info msg="CreateContainer within sandbox \"3680e435bb193d749f6cac5ee0a23ca21a777ba606c46a9f454cb42ef4060e47\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"c6ace07879b84e705bc8b532f8cd9162404b63746ad9faeae44e245e26539ead\""
	Nov 21 14:44:33 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:33.685318192Z" level=info msg="StartContainer for \"c6ace07879b84e705bc8b532f8cd9162404b63746ad9faeae44e245e26539ead\""
	Nov 21 14:44:33 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:33.689247614Z" level=info msg="connecting to shim c6ace07879b84e705bc8b532f8cd9162404b63746ad9faeae44e245e26539ead" address="unix:///run/containerd/s/6e69ecb1899b9e75727f8fe7f211e1f82d40f965205bb1565eeae343c2bafd56" protocol=ttrpc version=3
	Nov 21 14:44:33 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:33.690732150Z" level=info msg="CreateContainer within sandbox \"b58b59f73a24bb52a5f6c210ec1d0dfbddbbc55dbc0fd609423879994aa0b8ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4fa0544fe52cc0b0b57fcb28182f1f20dc7c79b3ef53dcb6dc677efecd5a9cc9\""
	Nov 21 14:44:33 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:33.693417924Z" level=info msg="StartContainer for \"4fa0544fe52cc0b0b57fcb28182f1f20dc7c79b3ef53dcb6dc677efecd5a9cc9\""
	Nov 21 14:44:33 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:33.696432221Z" level=info msg="connecting to shim 4fa0544fe52cc0b0b57fcb28182f1f20dc7c79b3ef53dcb6dc677efecd5a9cc9" address="unix:///run/containerd/s/a5f36c12d3eba8a08addb4ff6f6c45f4b1f35adc7b831563646c8ea27992d003" protocol=ttrpc version=3
	Nov 21 14:44:33 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:33.785690082Z" level=info msg="StartContainer for \"4fa0544fe52cc0b0b57fcb28182f1f20dc7c79b3ef53dcb6dc677efecd5a9cc9\" returns successfully"
	Nov 21 14:44:33 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:33.815503407Z" level=info msg="StartContainer for \"c6ace07879b84e705bc8b532f8cd9162404b63746ad9faeae44e245e26539ead\" returns successfully"
	Nov 21 14:44:37 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:37.378935963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:4fd396a4-7f86-4bac-b99a-f7427bb5deb9,Namespace:default,Attempt:0,}"
	Nov 21 14:44:37 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:37.481124358Z" level=info msg="connecting to shim befa3559e32d903c1abf0bc725ae5f12a26cdbb8b3fb4a57980282d9931d9d26" address="unix:///run/containerd/s/71dcf6bf5df9beb4a3d248e771df5a382c0db1f3a2b82a021424cdeb0bc07ccb" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:44:37 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:37.544176700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:4fd396a4-7f86-4bac-b99a-f7427bb5deb9,Namespace:default,Attempt:0,} returns sandbox id \"befa3559e32d903c1abf0bc725ae5f12a26cdbb8b3fb4a57980282d9931d9d26\""
	Nov 21 14:44:37 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:37.546307355Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.885153902Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.887206340Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937188"
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.889567142Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.893770222Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.894536985Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.348184988s"
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.894577862Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.898592935Z" level=info msg="CreateContainer within sandbox \"befa3559e32d903c1abf0bc725ae5f12a26cdbb8b3fb4a57980282d9931d9d26\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.912721761Z" level=info msg="Container ab7b2c1339a58ca880ca0312fd5f7d62085c7261261bf3758b721a01af22d534: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.926024035Z" level=info msg="CreateContainer within sandbox \"befa3559e32d903c1abf0bc725ae5f12a26cdbb8b3fb4a57980282d9931d9d26\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"ab7b2c1339a58ca880ca0312fd5f7d62085c7261261bf3758b721a01af22d534\""
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.927048695Z" level=info msg="StartContainer for \"ab7b2c1339a58ca880ca0312fd5f7d62085c7261261bf3758b721a01af22d534\""
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.928757405Z" level=info msg="connecting to shim ab7b2c1339a58ca880ca0312fd5f7d62085c7261261bf3758b721a01af22d534" address="unix:///run/containerd/s/71dcf6bf5df9beb4a3d248e771df5a382c0db1f3a2b82a021424cdeb0bc07ccb" protocol=ttrpc version=3
	Nov 21 14:44:39 old-k8s-version-092258 containerd[760]: time="2025-11-21T14:44:39.996819622Z" level=info msg="StartContainer for \"ab7b2c1339a58ca880ca0312fd5f7d62085c7261261bf3758b721a01af22d534\" returns successfully"
	Nov 21 14:44:46 old-k8s-version-092258 containerd[760]: E1121 14:44:46.197863     760 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [4fa0544fe52cc0b0b57fcb28182f1f20dc7c79b3ef53dcb6dc677efecd5a9cc9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36979 - 27014 "HINFO IN 2294269810657567619.5005884824654199478. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03500164s
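
	The single HINFO query above is CoreDNS's startup self-check. In-cluster resolution through this instance could be exercised from the busybox test pod, assuming the 1.28.4-glibc image ships the nslookup applet (a sketch):

	    kubectl --context old-k8s-version-092258 exec busybox -- nslookup kubernetes.default
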
	
	
	==> describe nodes <==
	Name:               old-k8s-version-092258
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-092258
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=old-k8s-version-092258
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_44_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:44:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-092258
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:44:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:44:37 +0000   Fri, 21 Nov 2025 14:44:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:44:37 +0000   Fri, 21 Nov 2025 14:44:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:44:37 +0000   Fri, 21 Nov 2025 14:44:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:44:37 +0000   Fri, 21 Nov 2025 14:44:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-092258
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                9e4fe947-6f95-4914-9cd3-ccd713480a21
	  Boot ID:                    41b0e09d-5a9a-49c9-8980-dca608ba3fce
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-86stv                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-092258                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kindnet-tfn5q                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-092258             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-092258    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-tdwt5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-092258             100m (5%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 42s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s   kubelet          Node old-k8s-version-092258 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s   kubelet          Node old-k8s-version-092258 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s   kubelet          Node old-k8s-version-092258 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  42s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           30s   node-controller  Node old-k8s-version-092258 event: Registered Node old-k8s-version-092258 in Controller
	  Normal  NodeReady                16s   kubelet          Node old-k8s-version-092258 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 13:02] overlayfs: idmapped layers are currently not supported
	[Nov21 13:03] overlayfs: idmapped layers are currently not supported
	[Nov21 13:06] overlayfs: idmapped layers are currently not supported
	[Nov21 13:08] overlayfs: idmapped layers are currently not supported
	[Nov21 13:09] overlayfs: idmapped layers are currently not supported
	[Nov21 13:10] overlayfs: idmapped layers are currently not supported
	[ +19.808801] overlayfs: idmapped layers are currently not supported
	[Nov21 13:11] overlayfs: idmapped layers are currently not supported
	[Nov21 13:12] overlayfs: idmapped layers are currently not supported
	[Nov21 13:13] overlayfs: idmapped layers are currently not supported
	[Nov21 13:14] overlayfs: idmapped layers are currently not supported
	[Nov21 13:15] overlayfs: idmapped layers are currently not supported
	[ +16.772572] overlayfs: idmapped layers are currently not supported
	[Nov21 13:16] overlayfs: idmapped layers are currently not supported
	[Nov21 13:17] overlayfs: idmapped layers are currently not supported
	[ +27.396777] overlayfs: idmapped layers are currently not supported
	[Nov21 13:18] overlayfs: idmapped layers are currently not supported
	[ +25.430119] overlayfs: idmapped layers are currently not supported
	[Nov21 13:19] overlayfs: idmapped layers are currently not supported
	[Nov21 13:20] overlayfs: idmapped layers are currently not supported
	[Nov21 13:21] overlayfs: idmapped layers are currently not supported
	[Nov21 13:22] overlayfs: idmapped layers are currently not supported
	[Nov21 13:23] overlayfs: idmapped layers are currently not supported
	[Nov21 13:24] overlayfs: idmapped layers are currently not supported
	[Nov21 13:55] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [32a76684e0ad48afa24dffa56bbd612225875cea5526f2fe91da5620cdd3737e] <==
	{"level":"info","ts":"2025-11-21T14:44:00.857351Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-21T14:44:00.860773Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-21T14:44:00.860806Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-21T14:44:00.857529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-21T14:44:00.861198Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-21T14:44:00.857558Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-21T14:44:00.857577Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-21T14:44:01.029245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-21T14:44:01.029466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-21T14:44:01.02957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-21T14:44:01.029683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-21T14:44:01.029774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-21T14:44:01.029858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-21T14:44:01.029946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-21T14:44:01.033213Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-092258 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-21T14:44:01.033415Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:44:01.034523Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-21T14:44:01.034765Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:44:01.035192Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:44:01.036885Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:44:01.04112Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:44:01.041303Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:44:01.055053Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-21T14:44:01.060057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-21T14:44:01.060254Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 14:44:49 up 19:27,  0 user,  load average: 2.25, 3.09, 2.75
	Linux old-k8s-version-092258 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [495595ef81ee7d983a4b62890080114a468713ef14bf361720fb1ef51e30f35d] <==
	I1121 14:44:22.827794       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:44:22.828022       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1121 14:44:22.828147       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:44:22.828164       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:44:22.828175       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:44:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:44:23.024438       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:44:23.024516       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:44:23.024545       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:44:23.025736       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:44:23.224664       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:44:23.224747       1 metrics.go:72] Registering metrics
	I1121 14:44:23.224843       1 controller.go:711] "Syncing nftables rules"
	I1121 14:44:33.032002       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:44:33.032055       1 main.go:301] handling current node
	I1121 14:44:43.024839       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:44:43.024870       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2e1cd1261e99f5cf421f076a966eedd90258d75cd1735ec5e4bc9ae1d5576945] <==
	I1121 14:44:04.361764       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1121 14:44:04.361814       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1121 14:44:04.367613       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1121 14:44:04.367684       1 aggregator.go:166] initial CRD sync complete...
	I1121 14:44:04.367697       1 autoregister_controller.go:141] Starting autoregister controller
	I1121 14:44:04.367705       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:44:04.367714       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:44:04.374026       1 controller.go:624] quota admission added evaluator for: namespaces
	I1121 14:44:04.403168       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:44:04.422675       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1121 14:44:04.968994       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:44:04.974442       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:44:04.974470       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:44:05.722694       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:44:05.789801       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:44:05.889316       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:44:05.896471       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1121 14:44:05.897760       1 controller.go:624] quota admission added evaluator for: endpoints
	I1121 14:44:05.902883       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:44:06.203440       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1121 14:44:07.370770       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1121 14:44:07.383830       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:44:07.398248       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1121 14:44:19.834381       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:44:20.024025       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [331a280f7d8fb0893c46a22085825d84571038b23952dd64524b062bc7f08b74] <==
	I1121 14:44:19.212645       1 shared_informer.go:318] Caches are synced for endpoint
	I1121 14:44:19.212738       1 shared_informer.go:318] Caches are synced for HPA
	I1121 14:44:19.212773       1 shared_informer.go:318] Caches are synced for disruption
	I1121 14:44:19.212803       1 shared_informer.go:318] Caches are synced for attach detach
	I1121 14:44:19.627294       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:44:19.659367       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:44:19.659575       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1121 14:44:19.901701       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tdwt5"
	I1121 14:44:19.929907       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tfn5q"
	I1121 14:44:20.044696       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1121 14:44:20.152846       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-v7mnp"
	I1121 14:44:20.183528       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-86stv"
	I1121 14:44:20.224114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="183.103851ms"
	I1121 14:44:20.243991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.819807ms"
	I1121 14:44:20.244109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.6µs"
	I1121 14:44:21.107620       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1121 14:44:21.134973       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-v7mnp"
	I1121 14:44:21.155140       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.168473ms"
	I1121 14:44:21.171976       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.779877ms"
	I1121 14:44:21.172179       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.497µs"
	I1121 14:44:33.168291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.391µs"
	I1121 14:44:33.193460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="100.346µs"
	I1121 14:44:34.128063       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1121 14:44:34.848411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.00588ms"
	I1121 14:44:34.848685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="129.432µs"
	
	
	==> kube-proxy [630ebb9fe56a1bea1ef2dfe24de2086594eb0afbdaf547e41ce7c777d9eb7705] <==
	I1121 14:44:20.860188       1 server_others.go:69] "Using iptables proxy"
	I1121 14:44:20.878393       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1121 14:44:20.931156       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:44:20.936886       1 server_others.go:152] "Using iptables Proxier"
	I1121 14:44:20.936939       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1121 14:44:20.936948       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1121 14:44:20.936971       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1121 14:44:20.937761       1 server.go:846] "Version info" version="v1.28.0"
	I1121 14:44:20.937784       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:44:20.938544       1 config.go:188] "Starting service config controller"
	I1121 14:44:20.938593       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1121 14:44:20.938625       1 config.go:97] "Starting endpoint slice config controller"
	I1121 14:44:20.938635       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1121 14:44:20.940292       1 config.go:315] "Starting node config controller"
	I1121 14:44:20.940306       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1121 14:44:21.040184       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1121 14:44:21.040242       1 shared_informer.go:318] Caches are synced for service config
	I1121 14:44:21.040487       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [46391c1bd1fc737d22bd847c1d63f9bd14e4d892ef33d465e9204dc377dd6002] <==
	W1121 14:44:04.821909       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1121 14:44:04.822028       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1121 14:44:04.822197       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1121 14:44:04.822220       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1121 14:44:04.824601       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1121 14:44:04.825484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1121 14:44:04.824774       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1121 14:44:04.825902       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1121 14:44:04.826065       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1121 14:44:04.826044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1121 14:44:04.824991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1121 14:44:04.826420       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1121 14:44:04.825212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1121 14:44:04.825284       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1121 14:44:04.825347       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1121 14:44:04.825382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1121 14:44:04.825431       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1121 14:44:04.824928       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1121 14:44:04.826802       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1121 14:44:04.826945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1121 14:44:04.827063       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1121 14:44:04.827207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1121 14:44:04.827355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1121 14:44:04.827495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1121 14:44:06.304765       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.149863    1526 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.150498    1526 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.918591    1526 topology_manager.go:215] "Topology Admit Handler" podUID="94e025a3-f19d-40ce-b6a6-9e2eb3b8f998" podNamespace="kube-system" podName="kube-proxy-tdwt5"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.954210    1526 topology_manager.go:215] "Topology Admit Handler" podUID="6bec8380-6059-40d0-b0ed-6c3906f84591" podNamespace="kube-system" podName="kindnet-tfn5q"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.980360    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/94e025a3-f19d-40ce-b6a6-9e2eb3b8f998-kube-proxy\") pod \"kube-proxy-tdwt5\" (UID: \"94e025a3-f19d-40ce-b6a6-9e2eb3b8f998\") " pod="kube-system/kube-proxy-tdwt5"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.980619    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94e025a3-f19d-40ce-b6a6-9e2eb3b8f998-xtables-lock\") pod \"kube-proxy-tdwt5\" (UID: \"94e025a3-f19d-40ce-b6a6-9e2eb3b8f998\") " pod="kube-system/kube-proxy-tdwt5"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.980760    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94e025a3-f19d-40ce-b6a6-9e2eb3b8f998-lib-modules\") pod \"kube-proxy-tdwt5\" (UID: \"94e025a3-f19d-40ce-b6a6-9e2eb3b8f998\") " pod="kube-system/kube-proxy-tdwt5"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.980886    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6bec8380-6059-40d0-b0ed-6c3906f84591-cni-cfg\") pod \"kindnet-tfn5q\" (UID: \"6bec8380-6059-40d0-b0ed-6c3906f84591\") " pod="kube-system/kindnet-tfn5q"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.981004    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bec8380-6059-40d0-b0ed-6c3906f84591-xtables-lock\") pod \"kindnet-tfn5q\" (UID: \"6bec8380-6059-40d0-b0ed-6c3906f84591\") " pod="kube-system/kindnet-tfn5q"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.981145    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bec8380-6059-40d0-b0ed-6c3906f84591-lib-modules\") pod \"kindnet-tfn5q\" (UID: \"6bec8380-6059-40d0-b0ed-6c3906f84591\") " pod="kube-system/kindnet-tfn5q"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.981319    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5lf7\" (UniqueName: \"kubernetes.io/projected/6bec8380-6059-40d0-b0ed-6c3906f84591-kube-api-access-m5lf7\") pod \"kindnet-tfn5q\" (UID: \"6bec8380-6059-40d0-b0ed-6c3906f84591\") " pod="kube-system/kindnet-tfn5q"
	Nov 21 14:44:19 old-k8s-version-092258 kubelet[1526]: I1121 14:44:19.981442    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2rxs\" (UniqueName: \"kubernetes.io/projected/94e025a3-f19d-40ce-b6a6-9e2eb3b8f998-kube-api-access-g2rxs\") pod \"kube-proxy-tdwt5\" (UID: \"94e025a3-f19d-40ce-b6a6-9e2eb3b8f998\") " pod="kube-system/kube-proxy-tdwt5"
	Nov 21 14:44:22 old-k8s-version-092258 kubelet[1526]: I1121 14:44:22.794618    1526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tdwt5" podStartSLOduration=3.794572825 podCreationTimestamp="2025-11-21 14:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:44:21.775006119 +0000 UTC m=+14.440254134" watchObservedRunningTime="2025-11-21 14:44:22.794572825 +0000 UTC m=+15.459820816"
	Nov 21 14:44:33 old-k8s-version-092258 kubelet[1526]: I1121 14:44:33.136665    1526 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 21 14:44:33 old-k8s-version-092258 kubelet[1526]: I1121 14:44:33.168328    1526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-tfn5q" podStartSLOduration=12.176504309 podCreationTimestamp="2025-11-21 14:44:19 +0000 UTC" firstStartedPulling="2025-11-21 14:44:20.481898213 +0000 UTC m=+13.147146196" lastFinishedPulling="2025-11-21 14:44:22.473675721 +0000 UTC m=+15.138923704" observedRunningTime="2025-11-21 14:44:22.795889554 +0000 UTC m=+15.461137546" watchObservedRunningTime="2025-11-21 14:44:33.168281817 +0000 UTC m=+25.833529808"
	Nov 21 14:44:33 old-k8s-version-092258 kubelet[1526]: I1121 14:44:33.168810    1526 topology_manager.go:215] "Topology Admit Handler" podUID="6a48c3f2-f439-40e1-885b-5850f95d1ffc" podNamespace="kube-system" podName="coredns-5dd5756b68-86stv"
	Nov 21 14:44:33 old-k8s-version-092258 kubelet[1526]: I1121 14:44:33.174944    1526 topology_manager.go:215] "Topology Admit Handler" podUID="a31c361f-8fb6-4726-a554-e70884e4d16e" podNamespace="kube-system" podName="storage-provisioner"
	Nov 21 14:44:33 old-k8s-version-092258 kubelet[1526]: I1121 14:44:33.200360    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktnhs\" (UniqueName: \"kubernetes.io/projected/6a48c3f2-f439-40e1-885b-5850f95d1ffc-kube-api-access-ktnhs\") pod \"coredns-5dd5756b68-86stv\" (UID: \"6a48c3f2-f439-40e1-885b-5850f95d1ffc\") " pod="kube-system/coredns-5dd5756b68-86stv"
	Nov 21 14:44:33 old-k8s-version-092258 kubelet[1526]: I1121 14:44:33.200594    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a31c361f-8fb6-4726-a554-e70884e4d16e-tmp\") pod \"storage-provisioner\" (UID: \"a31c361f-8fb6-4726-a554-e70884e4d16e\") " pod="kube-system/storage-provisioner"
	Nov 21 14:44:33 old-k8s-version-092258 kubelet[1526]: I1121 14:44:33.200711    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxf7g\" (UniqueName: \"kubernetes.io/projected/a31c361f-8fb6-4726-a554-e70884e4d16e-kube-api-access-xxf7g\") pod \"storage-provisioner\" (UID: \"a31c361f-8fb6-4726-a554-e70884e4d16e\") " pod="kube-system/storage-provisioner"
	Nov 21 14:44:33 old-k8s-version-092258 kubelet[1526]: I1121 14:44:33.200832    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a48c3f2-f439-40e1-885b-5850f95d1ffc-config-volume\") pod \"coredns-5dd5756b68-86stv\" (UID: \"6a48c3f2-f439-40e1-885b-5850f95d1ffc\") " pod="kube-system/coredns-5dd5756b68-86stv"
	Nov 21 14:44:34 old-k8s-version-092258 kubelet[1526]: I1121 14:44:34.812385    1526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.812339422 podCreationTimestamp="2025-11-21 14:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:44:34.811999567 +0000 UTC m=+27.477247550" watchObservedRunningTime="2025-11-21 14:44:34.812339422 +0000 UTC m=+27.477587405"
	Nov 21 14:44:34 old-k8s-version-092258 kubelet[1526]: I1121 14:44:34.830835    1526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-86stv" podStartSLOduration=14.83078559 podCreationTimestamp="2025-11-21 14:44:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:44:34.830509339 +0000 UTC m=+27.495757330" watchObservedRunningTime="2025-11-21 14:44:34.83078559 +0000 UTC m=+27.496033581"
	Nov 21 14:44:37 old-k8s-version-092258 kubelet[1526]: I1121 14:44:37.064261    1526 topology_manager.go:215] "Topology Admit Handler" podUID="4fd396a4-7f86-4bac-b99a-f7427bb5deb9" podNamespace="default" podName="busybox"
	Nov 21 14:44:37 old-k8s-version-092258 kubelet[1526]: I1121 14:44:37.128201    1526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmbgq\" (UniqueName: \"kubernetes.io/projected/4fd396a4-7f86-4bac-b99a-f7427bb5deb9-kube-api-access-tmbgq\") pod \"busybox\" (UID: \"4fd396a4-7f86-4bac-b99a-f7427bb5deb9\") " pod="default/busybox"
	
	
	==> storage-provisioner [c6ace07879b84e705bc8b532f8cd9162404b63746ad9faeae44e245e26539ead] <==
	I1121 14:44:33.821827       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:44:33.835269       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:44:33.835522       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1121 14:44:33.844745       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:44:33.845108       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-092258_43824c0e-5444-4d63-9465-8f0bcb9e3d2b!
	I1121 14:44:33.845246       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6d6cfaa-85d7-41d0-9ba2-d501adb4d7fd", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-092258_43824c0e-5444-4d63-9465-8f0bcb9e3d2b became leader
	I1121 14:44:33.946309       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-092258_43824c0e-5444-4d63-9465-8f0bcb9e3d2b!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-092258 -n old-k8s-version-092258
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-092258 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (13.60s)

TestStartStop/group/no-preload/serial/DeployApp (14.76s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-208006 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0ea2f04f-bf79-4916-b7c0-6bb05a5c87d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0ea2f04f-bf79-4916-b7c0-6bb05a5c87d0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004174652s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-208006 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
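For reference, the failing check above can be reproduced outside the harness. Below is a minimal standalone sketch in Go (hypothetical, not part of the suite; it assumes kubectl is on PATH and the no-preload-208006 context still exists) that re-runs `ulimit -n` in the busybox pod and compares the soft open-file limit to the 1048576 the suite expects:

	// Hypothetical reproduction of the failing assertion; the context name and
	// expected value are taken from this report's own output above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// kubectl exec busybox -- /bin/sh -c "ulimit -n", as the test runs it.
		out, err := exec.Command("kubectl", "--context", "no-preload-208006",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kubectl exec failed:", err)
			os.Exit(1)
		}
		got := strings.TrimSpace(string(out))
		if want := "1048576"; got != want {
			fmt.Printf("'ulimit -n' returned %s, expected %s\n", got, want)
			os.Exit(1)
		}
		fmt.Println("open-file limit OK:", got)
	}

A reading of 1024 typically means the pod inherited the default soft RLIMIT_NOFILE rather than the raised limit the suite expects from the node's init.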
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-208006
helpers_test.go:243: (dbg) docker inspect no-preload-208006:

-- stdout --
	[
	    {
	        "Id": "1e0c093eb824fed64933b797940c8238d911d3aec150dd510e66d5280b96bb39",
	        "Created": "2025-11-21T14:46:10.890663049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2844438,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:46:11.006893988Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/1e0c093eb824fed64933b797940c8238d911d3aec150dd510e66d5280b96bb39/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1e0c093eb824fed64933b797940c8238d911d3aec150dd510e66d5280b96bb39/hostname",
	        "HostsPath": "/var/lib/docker/containers/1e0c093eb824fed64933b797940c8238d911d3aec150dd510e66d5280b96bb39/hosts",
	        "LogPath": "/var/lib/docker/containers/1e0c093eb824fed64933b797940c8238d911d3aec150dd510e66d5280b96bb39/1e0c093eb824fed64933b797940c8238d911d3aec150dd510e66d5280b96bb39-json.log",
	        "Name": "/no-preload-208006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-208006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-208006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1e0c093eb824fed64933b797940c8238d911d3aec150dd510e66d5280b96bb39",
	                "LowerDir": "/var/lib/docker/overlay2/97e93d9d36c6404e1fdf7fc16f810513e13debddc7944f6c50d7d862f1c990f9-init/diff:/var/lib/docker/overlay2/789a4b9f9866e585907664b1eaf98d94438dbf699e0511f3ca5ba5ea682b005e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/97e93d9d36c6404e1fdf7fc16f810513e13debddc7944f6c50d7d862f1c990f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/97e93d9d36c6404e1fdf7fc16f810513e13debddc7944f6c50d7d862f1c990f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/97e93d9d36c6404e1fdf7fc16f810513e13debddc7944f6c50d7d862f1c990f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-208006",
	                "Source": "/var/lib/docker/volumes/no-preload-208006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-208006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-208006",
	                "name.minikube.sigs.k8s.io": "no-preload-208006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eae5e8bf8eea6950a367f46a305b79da8296b01966992ee5d4549339734788a5",
	            "SandboxKey": "/var/run/docker/netns/eae5e8bf8eea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36730"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36731"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36734"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36732"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36733"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-208006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:43:94:ae:63:fb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d96be44a654a15cf0b4e08c5c304476e6d24f8af31c19cc13890d475bc3c99d2",
	                    "EndpointID": "863b271eb0de3e7731739eef15f46eb2be0b2a73b82a89773b2ab8882a5b8cbe",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-208006",
	                        "1e0c093eb824"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-208006 -n no-preload-208006
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-208006 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-208006 logs -n 25: (1.235778289s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-650772 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo crio config                                                                                                                                                                                                                   │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ delete  │ -p cilium-650772                                                                                                                                                                                                                                    │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ start   │ -p cert-expiration-184410 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-184410   │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ force-systemd-env-041746 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-041746 │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ delete  │ -p force-systemd-env-041746                                                                                                                                                                                                                         │ force-systemd-env-041746 │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:43 UTC │
	│ start   │ -p cert-options-035007 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ ssh     │ cert-options-035007 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ ssh     │ -p cert-options-035007 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ delete  │ -p cert-options-035007                                                                                                                                                                                                                              │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ start   │ -p old-k8s-version-092258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:44 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-092258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:44 UTC │ 21 Nov 25 14:44 UTC │
	│ stop    │ -p old-k8s-version-092258 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:44 UTC │ 21 Nov 25 14:45 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-092258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:45 UTC │ 21 Nov 25 14:45 UTC │
	│ start   │ -p old-k8s-version-092258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:45 UTC │ 21 Nov 25 14:45 UTC │
	│ start   │ -p cert-expiration-184410 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-184410   │ jenkins │ v1.37.0 │ 21 Nov 25 14:45 UTC │ 21 Nov 25 14:46 UTC │
	│ delete  │ -p cert-expiration-184410                                                                                                                                                                                                                           │ cert-expiration-184410   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ image   │ old-k8s-version-092258 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ pause   │ -p old-k8s-version-092258 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ start   │ -p no-preload-208006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-208006        │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:47 UTC │
	│ unpause │ -p old-k8s-version-092258 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ delete  │ -p old-k8s-version-092258                                                                                                                                                                                                                           │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ delete  │ -p old-k8s-version-092258                                                                                                                                                                                                                           │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ start   │ -p embed-certs-695324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-695324       │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:46:16
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:46:16.326993 2845792 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:46:16.327104 2845792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:46:16.327116 2845792 out.go:374] Setting ErrFile to fd 2...
	I1121 14:46:16.327122 2845792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:46:16.327480 2845792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:46:16.327951 2845792 out.go:368] Setting JSON to false
	I1121 14:46:16.328831 2845792 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70125,"bootTime":1763666252,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 14:46:16.328919 2845792 start.go:143] virtualization:  
	I1121 14:46:16.331788 2845792 out.go:179] * [embed-certs-695324] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:46:16.335181 2845792 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:46:16.335232 2845792 notify.go:221] Checking for updates...
	I1121 14:46:16.340775 2845792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:46:16.343361 2845792 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:46:16.345968 2845792 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	I1121 14:46:16.349257 2845792 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:46:16.351997 2845792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:46:16.355115 2845792 config.go:182] Loaded profile config "no-preload-208006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:46:16.355263 2845792 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:46:16.400131 2845792 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:46:16.400273 2845792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:46:16.491502 2845792 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:47 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-21 14:46:16.482277837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:46:16.491609 2845792 docker.go:319] overlay module found
	I1121 14:46:16.494579 2845792 out.go:179] * Using the docker driver based on user configuration
	I1121 14:46:16.497276 2845792 start.go:309] selected driver: docker
	I1121 14:46:16.497309 2845792 start.go:930] validating driver "docker" against <nil>
	I1121 14:46:16.497328 2845792 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:46:16.498032 2845792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:46:16.564481 2845792 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:47 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-21 14:46:16.554548265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:46:16.564662 2845792 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:46:16.564884 2845792 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:46:16.567782 2845792 out.go:179] * Using Docker driver with root privileges
	I1121 14:46:16.570600 2845792 cni.go:84] Creating CNI manager for ""
	I1121 14:46:16.570679 2845792 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:46:16.570696 2845792 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 14:46:16.570780 2845792 start.go:353] cluster config:
	{Name:embed-certs-695324 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-695324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:46:16.573739 2845792 out.go:179] * Starting "embed-certs-695324" primary control-plane node in "embed-certs-695324" cluster
	I1121 14:46:16.576483 2845792 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 14:46:16.579327 2845792 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:46:16.582077 2845792 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:46:16.582134 2845792 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1121 14:46:16.582148 2845792 cache.go:65] Caching tarball of preloaded images
	I1121 14:46:16.582231 2845792 preload.go:238] Found /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1121 14:46:16.582250 2845792 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1121 14:46:16.582363 2845792 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/config.json ...
	I1121 14:46:16.582386 2845792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/config.json: {Name:mke14d63735a3a2e3fa6310a5ff7f022bfb6b94e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
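The profile save above is just the cluster config serialized to JSON at .minikube/profiles/<name>/config.json, guarded by a lock file. A minimal sketch of that step with a hypothetical trimmed-down config struct (not minikube's actual types):

package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// ClusterConfig is a hypothetical subset of the fields dumped in the log above.
type ClusterConfig struct {
	Name       string
	Driver     string
	Memory     int // MB
	CPUs       int
	EmbedCerts bool
}

func saveProfile(miniHome string, cfg ClusterConfig) error {
	dir := filepath.Join(miniHome, "profiles", cfg.Name)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	// Write to a temp file, then rename, so a crash never leaves a
	// half-written config.json behind.
	tmp := filepath.Join(dir, ".config.json.tmp")
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(dir, "config.json"))
}

func main() {
	cfg := ClusterConfig{Name: "embed-certs-695324", Driver: "docker", Memory: 3072, CPUs: 2, EmbedCerts: true}
	if err := saveProfile(os.ExpandEnv("$HOME/.minikube"), cfg); err != nil {
		panic(err)
	}
}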
	I1121 14:46:16.582540 2845792 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:46:16.608364 2845792 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:46:16.608388 2845792 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:46:16.608401 2845792 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:46:16.608425 2845792 start.go:360] acquireMachinesLock for embed-certs-695324: {Name:mkc2e7d115c6f1cd0f9b5fd1683b9702ddf4b916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:46:16.608531 2845792 start.go:364] duration metric: took 86.274µs to acquireMachinesLock for "embed-certs-695324"
	I1121 14:46:16.608564 2845792 start.go:93] Provisioning new machine with config: &{Name:embed-certs-695324 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-695324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:46:16.608656 2845792 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:46:15.804828 2843875 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-208006
	
	I1121 14:46:15.804858 2843875 ubuntu.go:182] provisioning hostname "no-preload-208006"
	I1121 14:46:15.804938 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:46:15.831238 2843875 main.go:143] libmachine: Using SSH client type: native
	I1121 14:46:15.831569 2843875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36730 <nil> <nil>}
	I1121 14:46:15.831585 2843875 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-208006 && echo "no-preload-208006" | sudo tee /etc/hostname
	I1121 14:46:15.997326 2843875 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-208006
	
	I1121 14:46:15.997403 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:46:16.032482 2843875 main.go:143] libmachine: Using SSH client type: native
	I1121 14:46:16.032819 2843875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36730 <nil> <nil>}
	I1121 14:46:16.032844 2843875 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-208006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-208006/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-208006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:46:16.193875 2843875 main.go:143] libmachine: SSH cmd err, output: <nil>: 
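The hostname fix above is one of many shell snippets the provisioner pushes over the forwarded SSH port (127.0.0.1:36730 for this machine). A sketch of that run-a-command-over-SSH pattern with golang.org/x/crypto/ssh; the address and key path are the ones logged, everything else is illustrative rather than libmachine's code:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH executes one shell command on the machine and returns its output.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Acceptable for a localhost-forwarded test VM; never for real hosts.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("127.0.0.1:36730", "docker",
		os.ExpandEnv("$HOME/.minikube/machines/no-preload-208006/id_rsa"),
		"hostname")
	fmt.Println(out, err)
}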
	I1121 14:46:16.193917 2843875 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-2633933/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-2633933/.minikube}
	I1121 14:46:16.193948 2843875 ubuntu.go:190] setting up certificates
	I1121 14:46:16.193958 2843875 provision.go:84] configureAuth start
	I1121 14:46:16.194023 2843875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-208006
	I1121 14:46:16.214793 2843875 provision.go:143] copyHostCerts
	I1121 14:46:16.214865 2843875 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem, removing ...
	I1121 14:46:16.214875 2843875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem
	I1121 14:46:16.214951 2843875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem (1679 bytes)
	I1121 14:46:16.215051 2843875 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem, removing ...
	I1121 14:46:16.215056 2843875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem
	I1121 14:46:16.215081 2843875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem (1082 bytes)
	I1121 14:46:16.215141 2843875 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem, removing ...
	I1121 14:46:16.215145 2843875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem
	I1121 14:46:16.215169 2843875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem (1123 bytes)
	I1121 14:46:16.215224 2843875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem org=jenkins.no-preload-208006 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-208006]
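The server certificate must carry every name and address the API server will be reached by, hence the san=[...] list above mixing the loopback address, the container IP, and the machine name. A standard-library sketch of minting such a cert (it assumes a PKCS#8-encoded CA key; minikube's own helpers differ):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the CA created during "setting up certificates".
	caPEM, err := os.ReadFile("ca.pem")
	must(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	must(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	must(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS8PrivateKey(keyBlock.Bytes) // assumes PKCS#8; adjust for PKCS#1 keys
	must(err)

	serverKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	must(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-208006"}},
		// SANs copied from the san=[...] list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-208006"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}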
	I1121 14:46:16.644766 2843875 provision.go:177] copyRemoteCerts
	I1121 14:46:16.646436 2843875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:46:16.646519 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:46:16.666316 2843875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36730 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:46:16.798116 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1121 14:46:16.819276 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:46:16.840174 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:46:16.867645 2843875 provision.go:87] duration metric: took 673.658795ms to configureAuth
	I1121 14:46:16.867690 2843875 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:46:16.867884 2843875 config.go:182] Loaded profile config "no-preload-208006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:46:16.867893 2843875 machine.go:97] duration metric: took 4.275245704s to provisionDockerMachine
	I1121 14:46:16.867900 2843875 client.go:176] duration metric: took 7.089985864s to LocalClient.Create
	I1121 14:46:16.867920 2843875 start.go:167] duration metric: took 7.09009768s to libmachine.API.Create "no-preload-208006"
	I1121 14:46:16.867929 2843875 start.go:293] postStartSetup for "no-preload-208006" (driver="docker")
	I1121 14:46:16.867952 2843875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:46:16.868009 2843875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:46:16.868050 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:46:16.886819 2843875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36730 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:46:17.003877 2843875 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:46:17.008553 2843875 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:46:17.008583 2843875 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:46:17.008608 2843875 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/addons for local assets ...
	I1121 14:46:17.008680 2843875 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/files for local assets ...
	I1121 14:46:17.008759 2843875 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem -> 26357852.pem in /etc/ssl/certs
	I1121 14:46:17.008857 2843875 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:46:17.021454 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:46:17.046785 2843875 start.go:296] duration metric: took 178.84026ms for postStartSetup
	I1121 14:46:17.047464 2843875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-208006
	I1121 14:46:17.071160 2843875 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/config.json ...
	I1121 14:46:17.071388 2843875 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:46:17.071429 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:46:17.096355 2843875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36730 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:46:17.229217 2843875 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:46:17.237671 2843875 start.go:128] duration metric: took 7.467246221s to createHost
	I1121 14:46:17.237737 2843875 start.go:83] releasing machines lock for "no-preload-208006", held for 7.467415857s
	I1121 14:46:17.237833 2843875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-208006
	I1121 14:46:17.267402 2843875 ssh_runner.go:195] Run: cat /version.json
	I1121 14:46:17.267454 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:46:17.267686 2843875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:46:17.267757 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:46:17.285868 2843875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36730 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:46:17.294395 2843875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36730 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:46:17.422212 2843875 ssh_runner.go:195] Run: systemctl --version
	I1121 14:46:17.534821 2843875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:46:17.547177 2843875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:46:17.547248 2843875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:46:17.586563 2843875 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1121 14:46:17.586583 2843875 start.go:496] detecting cgroup driver to use...
	I1121 14:46:17.586614 2843875 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:46:17.586665 2843875 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:46:17.606535 2843875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:46:17.621163 2843875 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:46:17.621284 2843875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:46:17.641636 2843875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:46:17.660887 2843875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:46:17.797691 2843875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:46:17.967227 2843875 docker.go:234] disabling docker service ...
	I1121 14:46:17.967298 2843875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:46:17.993890 2843875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:46:18.010760 2843875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:46:18.165382 2843875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:46:18.315505 2843875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:46:18.330184 2843875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:46:18.345512 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1121 14:46:18.354334 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:46:18.363523 2843875 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1121 14:46:18.363592 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1121 14:46:18.371934 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:46:18.380389 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:46:18.388714 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:46:18.397100 2843875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:46:18.404900 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:46:18.413357 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:46:18.421810 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
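The run of sed edits above rewrites /etc/containerd/config.toml in place: sandbox image, restrict_oom_score_adj, runtime type, CNI conf_dir, unprivileged ports, and the cgroup driver. A Go equivalent of just the SystemdCgroup rewrite (a sketch; minikube itself shells out to sed as logged):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same effect as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}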
	I1121 14:46:18.430502 2843875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:46:18.438080 2843875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:46:18.445378 2843875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:46:18.587718 2843875 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1121 14:46:18.693876 2843875 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:46:18.693948 2843875 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:46:18.698190 2843875 start.go:564] Will wait 60s for crictl version
	I1121 14:46:18.698256 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:18.706284 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:46:18.769772 2843875 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:46:18.769844 2843875 ssh_runner.go:195] Run: containerd --version
	I1121 14:46:18.791958 2843875 ssh_runner.go:195] Run: containerd --version
	I1121 14:46:18.826059 2843875 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1121 14:46:18.829243 2843875 cli_runner.go:164] Run: docker network inspect no-preload-208006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:46:18.848181 2843875 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:46:18.852277 2843875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
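The host.minikube.internal mapping is made idempotent by filtering out any existing line before appending, which is exactly what the grep -v / echo pipeline above does. A Go sketch of the same rewrite (needs root against the real /etc/hosts):

package main

import (
	"os"
	"strings"
)

// addHostsEntry mirrors the shell one-liner above: drop any stale line that
// maps the name, append the fresh "IP<TAB>name" mapping, write the file back.
func addHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := addHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}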
	I1121 14:46:18.862918 2843875 kubeadm.go:884] updating cluster {Name:no-preload-208006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-208006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:46:18.863033 2843875 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:46:18.863082 2843875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:46:18.891641 2843875 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1121 14:46:18.891668 2843875 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1121 14:46:18.891704 2843875 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:46:18.891913 2843875 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:46:18.892013 2843875 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:46:18.892105 2843875 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:46:18.892192 2843875 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:46:18.892275 2843875 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1121 14:46:18.892356 2843875 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:46:18.892445 2843875 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:46:18.895334 2843875 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:46:18.895632 2843875 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:46:18.895816 2843875 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:46:18.895973 2843875 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:46:18.896120 2843875 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:46:18.896434 2843875 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:46:18.896690 2843875 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1121 14:46:18.896914 2843875 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
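Each "daemon lookup ... No such image" line above is the expected miss path: an image is first probed in the local docker daemon, and only on a miss does the loader fall back to the on-disk cache. A sketch of such a probe via the docker CLI (an illustration, not the internal API minikube uses):

package main

import (
	"fmt"
	"os/exec"
)

// inDaemon reports whether the docker daemon already has the image;
// "docker image inspect" exits non-zero when it does not.
func inDaemon(image string) bool {
	return exec.Command("docker", "image", "inspect", image).Run() == nil
}

func main() {
	for _, img := range []string{
		"registry.k8s.io/pause:3.10.1",
		"registry.k8s.io/etcd:3.6.4-0",
	} {
		fmt.Printf("%s in daemon: %v\n", img, inDaemon(img))
	}
}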
	I1121 14:46:19.153519 2843875 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc"
	I1121 14:46:19.153641 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:46:19.153810 2843875 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e"
	I1121 14:46:19.153879 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1121 14:46:19.154772 2843875 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196"
	I1121 14:46:19.154864 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:46:19.158179 2843875 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9"
	I1121 14:46:19.158294 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:46:19.165306 2843875 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1121 14:46:19.165424 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1121 14:46:19.193299 2843875 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0"
	I1121 14:46:19.193422 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:46:19.197269 2843875 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a"
	I1121 14:46:19.197384 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:46:19.272757 2843875 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1121 14:46:19.272850 2843875 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:46:19.272932 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:19.273046 2843875 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1121 14:46:19.273085 2843875 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:46:19.273142 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:19.273247 2843875 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1121 14:46:19.273294 2843875 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:46:19.273337 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:19.274544 2843875 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1121 14:46:19.274650 2843875 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:46:19.274720 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:19.308807 2843875 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1121 14:46:19.308900 2843875 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:46:19.308979 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:19.320250 2843875 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1121 14:46:19.320489 2843875 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:46:19.320545 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:46:19.320581 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:19.320489 2843875 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1121 14:46:19.320621 2843875 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1121 14:46:19.320649 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:19.320449 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:46:19.320530 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:46:19.320376 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:46:19.324700 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:46:19.455679 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:46:19.455688 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:46:19.461927 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:46:19.461961 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:46:19.461999 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:46:19.463309 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:46:16.611879 2845792 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:46:16.612093 2845792 start.go:159] libmachine.API.Create for "embed-certs-695324" (driver="docker")
	I1121 14:46:16.612133 2845792 client.go:173] LocalClient.Create starting
	I1121 14:46:16.612196 2845792 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem
	I1121 14:46:16.612236 2845792 main.go:143] libmachine: Decoding PEM data...
	I1121 14:46:16.612253 2845792 main.go:143] libmachine: Parsing certificate...
	I1121 14:46:16.612307 2845792 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem
	I1121 14:46:16.612333 2845792 main.go:143] libmachine: Decoding PEM data...
	I1121 14:46:16.612344 2845792 main.go:143] libmachine: Parsing certificate...
	I1121 14:46:16.612716 2845792 cli_runner.go:164] Run: docker network inspect embed-certs-695324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:46:16.646448 2845792 cli_runner.go:211] docker network inspect embed-certs-695324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:46:16.646514 2845792 network_create.go:284] running [docker network inspect embed-certs-695324] to gather additional debugging logs...
	I1121 14:46:16.646531 2845792 cli_runner.go:164] Run: docker network inspect embed-certs-695324
	W1121 14:46:16.660808 2845792 cli_runner.go:211] docker network inspect embed-certs-695324 returned with exit code 1
	I1121 14:46:16.660841 2845792 network_create.go:287] error running [docker network inspect embed-certs-695324]: docker network inspect embed-certs-695324: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-695324 not found
	I1121 14:46:16.660857 2845792 network_create.go:289] output of [docker network inspect embed-certs-695324]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-695324 not found
	
	** /stderr **
	I1121 14:46:16.660948 2845792 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:46:16.679850 2845792 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c13a3bee40ff IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:9f:8e:c6:2a:d6} reservation:<nil>}
	I1121 14:46:16.680123 2845792 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1859e8fd5584 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:c6:00:f6:5b:96} reservation:<nil>}
	I1121 14:46:16.680363 2845792 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-44a9b6062c4d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:b5:31:a5:3d:f0} reservation:<nil>}
	I1121 14:46:16.680806 2845792 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019654d0}
	I1121 14:46:16.680824 2845792 network_create.go:124] attempt to create docker network embed-certs-695324 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1121 14:46:16.680877 2845792 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-695324 embed-certs-695324
	I1121 14:46:16.764398 2845792 network_create.go:108] docker network embed-certs-695324 192.168.76.0/24 created
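The three "skipping subnet ... that is taken" lines show the subnet picker walking candidate private /24 networks until one is free; the logged sequence 49, 58, 67, 76 suggests a step of 9. A sketch of that scan (hypothetical helper, not the exact minikube routine):

package main

import "fmt"

// firstFreeSubnet probes 192.168.49.0/24, 192.168.58.0/24, ... (step 9)
// and returns the first candidate no existing bridge network occupies.
func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third < 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // br-c13a3bee40ff
		"192.168.58.0/24": true, // br-1859e8fd5584
		"192.168.67.0/24": true, // br-44a9b6062c4d
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.76.0/24, as in the log
}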
	I1121 14:46:16.764426 2845792 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-695324" container
	I1121 14:46:16.764513 2845792 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:46:16.781184 2845792 cli_runner.go:164] Run: docker volume create embed-certs-695324 --label name.minikube.sigs.k8s.io=embed-certs-695324 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:46:16.805680 2845792 oci.go:103] Successfully created a docker volume embed-certs-695324
	I1121 14:46:16.805771 2845792 cli_runner.go:164] Run: docker run --rm --name embed-certs-695324-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-695324 --entrypoint /usr/bin/test -v embed-certs-695324:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:46:17.385626 2845792 oci.go:107] Successfully prepared a docker volume embed-certs-695324
	I1121 14:46:17.385698 2845792 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:46:17.385708 2845792 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:46:17.385773 2845792 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-695324:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
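Preloaded images are extracted into the machine's volume by a throwaway container whose entrypoint is tar, which is exactly the docker run logged above. A sketch that composes the same invocation (the image digest is elided here for brevity):

package main

import (
	"os"
	"os/exec"
)

// extractPreload runs the same sidecar the log shows: a --rm container with
// /usr/bin/tar as entrypoint, the tarball bind-mounted read-only, and the
// machine's volume mounted at /extractDir.
func extractPreload(tarball, volume, baseImage string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		baseImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := extractPreload(
		os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4"),
		"embed-certs-695324",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924")
	if err != nil {
		panic(err)
	}
}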
	I1121 14:46:19.516354 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:46:19.674508 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:46:19.674559 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:46:19.674621 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:46:19.674643 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:46:19.674684 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:46:19.674718 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:46:19.727578 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:46:19.908931 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:46:19.909074 2843875 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1121 14:46:19.909168 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:46:19.909244 2843875 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1121 14:46:19.909322 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:46:19.909390 2843875 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1121 14:46:19.909460 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:46:19.909537 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:46:19.909601 2843875 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1121 14:46:19.909672 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:46:19.942158 2843875 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1121 14:46:19.942297 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:46:19.999033 2843875 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1121 14:46:19.999238 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1121 14:46:19.999261 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1121 14:46:19.999063 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1121 14:46:19.999327 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1121 14:46:19.999090 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1121 14:46:19.999371 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1121 14:46:19.999108 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1121 14:46:19.999426 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1121 14:46:19.999492 2843875 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1121 14:46:19.999586 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:46:19.999649 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1121 14:46:19.999667 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1121 14:46:19.999727 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1121 14:46:20.110939 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1121 14:46:20.111022 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1121 14:46:20.111109 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1121 14:46:20.111146 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
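Every "existence check ... Process exited with status 1" above is the cheap path of the transfer step: stat the remote tarball, and only scp it when the probe fails or disagrees on size/mtime. A local-filesystem analogue of that decide-then-copy logic (a sketch, not minikube's ssh_runner):

package main

import (
	"fmt"
	"io"
	"os"
)

// syncFile copies src over dst only when dst is missing or differs in size
// or mtime, the same decision the remote stat -c "%s %y" probes feed into.
func syncFile(src, dst string) (bool, error) {
	si, err := os.Stat(src)
	if err != nil {
		return false, err
	}
	if di, err := os.Stat(dst); err == nil &&
		di.Size() == si.Size() && di.ModTime().Equal(si.ModTime()) {
		return false, nil // already up to date: skip the transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return false, err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return false, err
	}
	defer out.Close()
	if _, err := io.Copy(out, in); err != nil {
		return false, err
	}
	// Preserve mtime so the next probe can match on it.
	return true, os.Chtimes(dst, si.ModTime(), si.ModTime())
}

func main() {
	copied, err := syncFile("etcd_3.6.4-0", "/var/lib/minikube/images/etcd_3.6.4-0")
	fmt.Println(copied, err)
}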
	W1121 14:46:20.215892 2843875 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1121 14:46:20.216111 2843875 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1121 14:46:20.216208 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:46:20.342555 2843875 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1121 14:46:20.342678 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1121 14:46:20.435044 2843875 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1121 14:46:20.435101 2843875 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:46:20.435198 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:20.854172 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:46:20.854278 2843875 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1121 14:46:20.854316 2843875 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:46:20.854367 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:46:24.097961 2843875 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (3.243552672s)
	I1121 14:46:24.097972 2843875 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.243728495s)
	I1121 14:46:24.097984 2843875 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1121 14:46:24.098001 2843875 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:46:24.098044 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:46:24.098109 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:46:23.085129 2845792 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-695324:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.699321089s)
	I1121 14:46:23.085159 2845792 kic.go:203] duration metric: took 5.699447592s to extract preloaded images to volume ...
	W1121 14:46:23.085298 2845792 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 14:46:23.085403 2845792 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:46:23.176190 2845792 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-695324 --name embed-certs-695324 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-695324 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-695324 --network embed-certs-695324 --ip 192.168.76.2 --volume embed-certs-695324:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
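Note the --publish=127.0.0.1::8443 style flags above: with no host port given, Docker picks free ephemeral ports, and the inspect calls elsewhere in this log recover them. A sketch using the same Go template the log shows for 22/tcp:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort recovers the ephemeral host port Docker chose for a published
// container port, using the same template as the inspect calls in this log.
func hostPort(container, port string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	p, err := hostPort("embed-certs-695324", "22/tcp")
	fmt.Println(p, err) // e.g. 36735, matching the SSH dials below
}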
	I1121 14:46:23.538701 2845792 cli_runner.go:164] Run: docker container inspect embed-certs-695324 --format={{.State.Running}}
	I1121 14:46:23.562031 2845792 cli_runner.go:164] Run: docker container inspect embed-certs-695324 --format={{.State.Status}}
	I1121 14:46:23.584821 2845792 cli_runner.go:164] Run: docker exec embed-certs-695324 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:46:23.645344 2845792 oci.go:144] the created container "embed-certs-695324" has a running status.
	I1121 14:46:23.645369 2845792 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa...
	I1121 14:46:25.298742 2845792 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:46:25.322579 2845792 cli_runner.go:164] Run: docker container inspect embed-certs-695324 --format={{.State.Status}}
	I1121 14:46:25.340758 2845792 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:46:25.340783 2845792 kic_runner.go:114] Args: [docker exec --privileged embed-certs-695324 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:46:25.422837 2845792 cli_runner.go:164] Run: docker container inspect embed-certs-695324 --format={{.State.Status}}
	I1121 14:46:25.448579 2845792 machine.go:94] provisionDockerMachine start ...
	I1121 14:46:25.448698 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:46:25.472700 2845792 main.go:143] libmachine: Using SSH client type: native
	I1121 14:46:25.473059 2845792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36735 <nil> <nil>}
	I1121 14:46:25.473077 2845792 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:46:25.473757 2845792 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
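The "handshake failed: EOF" here is benign: the container is up but sshd is not yet accepting connections, and the next attempt at 14:46:28 succeeds. A sketch of the wait-for-SSH loop such provisioners typically use (hypothetical helper, not libmachine's code):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls the forwarded port until something accepts TCP, with a
// short sleep between attempts. A real client would then retry the SSH
// handshake itself, since TCP can come up before sshd is ready.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh on %s not reachable after %s", addr, timeout)
}

func main() {
	fmt.Println(waitForSSH("127.0.0.1:36735", 30*time.Second))
}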
	I1121 14:46:25.954762 2843875 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.856695361s)
	I1121 14:46:25.954790 2843875 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1121 14:46:25.954808 2843875 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:46:25.954854 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:46:25.954919 2843875 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.856767293s)
	I1121 14:46:25.954958 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:46:27.000276 2843875 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.045289145s)
	I1121 14:46:27.000291 2843875 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.045413088s)
	I1121 14:46:27.000310 2843875 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1121 14:46:27.000325 2843875 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1121 14:46:27.000330 2843875 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:46:27.000383 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:46:27.000413 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:46:27.894033 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1121 14:46:27.894075 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1121 14:46:27.894124 2843875 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1121 14:46:27.894148 2843875 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:46:27.894193 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:46:29.189125 2843875 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.29490269s)
	I1121 14:46:29.189154 2843875 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1121 14:46:29.189174 2843875 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:46:29.189223 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
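Each "Loading image" / "images import" pair above is one iteration of the same recipe: scp a cached image tarball into /var/lib/minikube/images, then import it into containerd's k8s.io namespace, which is the namespace the CRI (and therefore kubelet and crictl) reads from. One iteration by hand:

	# Importing into the default ctr namespace would leave the image invisible
	# to crictl/kubelet, hence -n=k8s.io.
	sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	sudo crictl images | grep etcd    # confirm the CRI now lists it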
	I1121 14:46:28.617194 2845792 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-695324
	
	I1121 14:46:28.617261 2845792 ubuntu.go:182] provisioning hostname "embed-certs-695324"
	I1121 14:46:28.617363 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:46:28.638734 2845792 main.go:143] libmachine: Using SSH client type: native
	I1121 14:46:28.639047 2845792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36735 <nil> <nil>}
	I1121 14:46:28.639058 2845792 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-695324 && echo "embed-certs-695324" | sudo tee /etc/hostname
	I1121 14:46:28.806918 2845792 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-695324
	
	I1121 14:46:28.807111 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:46:28.842793 2845792 main.go:143] libmachine: Using SSH client type: native
	I1121 14:46:28.843096 2845792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36735 <nil> <nil>}
	I1121 14:46:28.843112 2845792 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-695324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-695324/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-695324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:46:28.989484 2845792 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:46:28.989562 2845792 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-2633933/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-2633933/.minikube}
	I1121 14:46:28.989598 2845792 ubuntu.go:190] setting up certificates
	I1121 14:46:28.989645 2845792 provision.go:84] configureAuth start
	I1121 14:46:28.989741 2845792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-695324
	I1121 14:46:29.022787 2845792 provision.go:143] copyHostCerts
	I1121 14:46:29.022868 2845792 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem, removing ...
	I1121 14:46:29.022877 2845792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem
	I1121 14:46:29.022948 2845792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem (1082 bytes)
	I1121 14:46:29.023034 2845792 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem, removing ...
	I1121 14:46:29.023039 2845792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem
	I1121 14:46:29.023069 2845792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem (1123 bytes)
	I1121 14:46:29.023120 2845792 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem, removing ...
	I1121 14:46:29.023125 2845792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem
	I1121 14:46:29.023148 2845792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem (1679 bytes)
	I1121 14:46:29.023191 2845792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem org=jenkins.embed-certs-695324 san=[127.0.0.1 192.168.76.2 embed-certs-695324 localhost minikube]
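The server cert generated above is signed by the profile CA, with SANs covering every name the machine may be dialed by: loopback, the container IP, the profile hostname, localhost and minikube. An equivalent openssl sketch, with hypothetical file names standing in for the paths in the log:

	# Hypothetical paths; the SAN list mirrors the one logged above.
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.embed-certs-695324" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:embed-certs-695324,DNS:localhost,DNS:minikube')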
	I1121 14:46:29.570345 2845792 provision.go:177] copyRemoteCerts
	I1121 14:46:29.570454 2845792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:46:29.570538 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:46:29.598483 2845792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36735 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa Username:docker}
	I1121 14:46:29.701473 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1121 14:46:29.722045 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:46:29.741862 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:46:29.762287 2845792 provision.go:87] duration metric: took 772.611896ms to configureAuth
	I1121 14:46:29.762355 2845792 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:46:29.762569 2845792 config.go:182] Loaded profile config "embed-certs-695324": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:46:29.762605 2845792 machine.go:97] duration metric: took 4.314000911s to provisionDockerMachine
	I1121 14:46:29.762631 2845792 client.go:176] duration metric: took 13.150486471s to LocalClient.Create
	I1121 14:46:29.762734 2845792 start.go:167] duration metric: took 13.150641412s to libmachine.API.Create "embed-certs-695324"
	I1121 14:46:29.762766 2845792 start.go:293] postStartSetup for "embed-certs-695324" (driver="docker")
	I1121 14:46:29.762794 2845792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:46:29.762882 2845792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:46:29.762944 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:46:29.783202 2845792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36735 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa Username:docker}
	I1121 14:46:29.887259 2845792 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:46:29.891015 2845792 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:46:29.891045 2845792 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:46:29.891055 2845792 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/addons for local assets ...
	I1121 14:46:29.891110 2845792 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/files for local assets ...
	I1121 14:46:29.891193 2845792 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem -> 26357852.pem in /etc/ssl/certs
	I1121 14:46:29.891296 2845792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:46:29.899723 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:46:29.920828 2845792 start.go:296] duration metric: took 158.030476ms for postStartSetup
	I1121 14:46:29.921336 2845792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-695324
	I1121 14:46:29.939087 2845792 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/config.json ...
	I1121 14:46:29.939375 2845792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:46:29.939417 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:46:29.959478 2845792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36735 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa Username:docker}
	I1121 14:46:30.067250 2845792 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:46:30.073376 2845792 start.go:128] duration metric: took 13.464704227s to createHost
	I1121 14:46:30.073407 2845792 start.go:83] releasing machines lock for "embed-certs-695324", held for 13.464860202s
	I1121 14:46:30.073489 2845792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-695324
	I1121 14:46:30.091548 2845792 ssh_runner.go:195] Run: cat /version.json
	I1121 14:46:30.091564 2845792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:46:30.091604 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:46:30.091643 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:46:30.129854 2845792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36735 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa Username:docker}
	I1121 14:46:30.137554 2845792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36735 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa Username:docker}
	I1121 14:46:30.245801 2845792 ssh_runner.go:195] Run: systemctl --version
	I1121 14:46:30.350964 2845792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:46:30.356593 2845792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:46:30.356741 2845792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:46:30.394493 2845792 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
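The find pass above parks every pre-existing bridge or podman CNI config under a *.mk_disabled suffix so it cannot shadow the CNI minikube installs next (kindnet, per the "recommending kindnet" line further down). The same command in plain shell quoting:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;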
	I1121 14:46:30.394533 2845792 start.go:496] detecting cgroup driver to use...
	I1121 14:46:30.394566 2845792 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:46:30.394628 2845792 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:46:30.410709 2845792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:46:30.425437 2845792 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:46:30.425513 2845792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:46:30.442999 2845792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:46:30.462984 2845792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:46:30.635865 2845792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:46:30.776089 2845792 docker.go:234] disabling docker service ...
	I1121 14:46:30.776163 2845792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:46:30.801984 2845792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:46:30.816781 2845792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:46:30.966476 2845792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:46:31.101428 2845792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:46:31.116189 2845792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:46:31.134844 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1121 14:46:31.146050 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:46:31.161494 2845792 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1121 14:46:31.161608 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1121 14:46:31.172353 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:46:31.182106 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:46:31.194710 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:46:31.203930 2845792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:46:31.213693 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:46:31.223034 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:46:31.231804 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:46:31.241520 2845792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:46:31.248849 2845792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:46:31.256106 2845792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:46:31.411790 2845792 ssh_runner.go:195] Run: sudo systemctl restart containerd
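The sed passes above converge /etc/containerd/config.toml on a handful of settings before the restart: the pause image, the runc v2 shim, unprivileged ports, and crucially SystemdCgroup = false so the runtime uses the same "cgroupfs" driver the kubelet is configured with below; a driver mismatch there leaves pods failing to start. The decisive edit, repeated standalone:

	# Pin runc to cgroupfs so runtime and kubelet agree on the cgroup driver.
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo systemctl restart containerd
	stat /run/containerd/containerd.sock    # same readiness probe the log runs next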
	I1121 14:46:31.619638 2845792 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:46:31.619759 2845792 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:46:31.624454 2845792 start.go:564] Will wait 60s for crictl version
	I1121 14:46:31.624569 2845792 ssh_runner.go:195] Run: which crictl
	I1121 14:46:31.634889 2845792 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:46:31.685642 2845792 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:46:31.685764 2845792 ssh_runner.go:195] Run: containerd --version
	I1121 14:46:31.706549 2845792 ssh_runner.go:195] Run: containerd --version
	I1121 14:46:31.733431 2845792 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1121 14:46:31.736500 2845792 cli_runner.go:164] Run: docker network inspect embed-certs-695324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:46:31.760669 2845792 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1121 14:46:31.765115 2845792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
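The grep-into-temp-file-then-cp dance above is deliberate: inside a Docker container /etc/hosts is bind-mounted, so it has to be overwritten in place with cp; sed -i or mv would try to replace the inode and fail with "Device or resource busy". The idempotent pattern spelled out:

	# Drop any stale entry, append the fresh one, copy over the bind mount.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.76.1\thost.minikube.internal\n'
	} > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$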
	I1121 14:46:31.775687 2845792 kubeadm.go:884] updating cluster {Name:embed-certs-695324 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-695324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:46:31.775803 2845792 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:46:31.775861 2845792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:46:31.810005 2845792 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:46:31.810025 2845792 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:46:31.810084 2845792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:46:31.839637 2845792 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:46:31.839708 2845792 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:46:31.839731 2845792 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1121 14:46:31.839867 2845792 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-695324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-695324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
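The unit fragment above is a systemd drop-in: the first, empty ExecStart= clears the command inherited from the base kubelet.service, and the second substitutes the minikube-specific command line. Installed by hand it would look like this (flags exactly as logged):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Unit]
	Wants=containerd.service
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-695324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	EOF
	sudo systemctl daemon-reload    # required before the override takes effect
	systemctl cat kubelet           # shows the base unit with this drop-in merged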
	I1121 14:46:31.839969 2845792 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:46:31.867506 2845792 cni.go:84] Creating CNI manager for ""
	I1121 14:46:31.867526 2845792 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:46:31.867544 2845792 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:46:31.867566 2845792 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-695324 NodeName:embed-certs-695324 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:46:31.867681 2845792 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-695324"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
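
Before this YAML reaches kubeadm init below, it can be sanity-checked offline; kubeadm ships a validator for exactly this multi-document config format, and --dry-run renders every manifest without touching the node:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run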
	
	I1121 14:46:31.867745 2845792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:46:31.876564 2845792 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:46:31.876642 2845792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:46:31.884832 2845792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1121 14:46:31.898777 2845792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:46:31.913522 2845792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1121 14:46:31.927706 2845792 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:46:31.931858 2845792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:46:31.942031 2845792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:46:32.106493 2845792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:46:32.125935 2845792 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324 for IP: 192.168.76.2
	I1121 14:46:32.126008 2845792 certs.go:195] generating shared ca certs ...
	I1121 14:46:32.126042 2845792 certs.go:227] acquiring lock for ca certs: {Name:mk0a1b8efa9f1d453751b4f7afafeea16d7243a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:32.126242 2845792 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.key
	I1121 14:46:32.126329 2845792 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.key
	I1121 14:46:32.126379 2845792 certs.go:257] generating profile certs ...
	I1121 14:46:32.126486 2845792 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/client.key
	I1121 14:46:32.126520 2845792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/client.crt with IP's: []
	I1121 14:46:32.588460 2845792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/client.crt ...
	I1121 14:46:32.588534 2845792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/client.crt: {Name:mk8fad0fe6ddd8ca3ea8e59602e9b95d3e1e2e66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:32.588753 2845792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/client.key ...
	I1121 14:46:32.588794 2845792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/client.key: {Name:mk9fa67e0e4f3c9d0d7f7d4a93fdd091a5ebe542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:32.588930 2845792 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.key.f2f4e569
	I1121 14:46:32.588973 2845792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.crt.f2f4e569 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1121 14:46:33.015392 2845792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.crt.f2f4e569 ...
	I1121 14:46:33.015502 2845792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.crt.f2f4e569: {Name:mke85b18f77dc07d9b05f4b95b9d2e9b941dbefa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:33.015759 2845792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.key.f2f4e569 ...
	I1121 14:46:33.015796 2845792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.key.f2f4e569: {Name:mk076c6ec186d21de6b0c211f54328fe2ad889e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:33.015956 2845792 certs.go:382] copying /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.crt.f2f4e569 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.crt
	I1121 14:46:33.016083 2845792 certs.go:386] copying /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.key.f2f4e569 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.key
	I1121 14:46:33.016175 2845792 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/proxy-client.key
	I1121 14:46:33.016228 2845792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/proxy-client.crt with IP's: []
	I1121 14:46:33.213450 2845792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/proxy-client.crt ...
	I1121 14:46:33.213525 2845792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/proxy-client.crt: {Name:mkd7670327930017620cb6fe39b50c2de2e744ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:33.213761 2845792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/proxy-client.key ...
	I1121 14:46:33.213799 2845792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/proxy-client.key: {Name:mk6fdeb8d841ba53f5c563a5da2a1d7f25fa31d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:33.214053 2845792 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785.pem (1338 bytes)
	W1121 14:46:33.214119 2845792 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785_empty.pem, impossibly tiny 0 bytes
	I1121 14:46:33.214148 2845792 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:46:33.214207 2845792 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem (1082 bytes)
	I1121 14:46:33.214263 2845792 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:46:33.214310 2845792 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem (1679 bytes)
	I1121 14:46:33.214399 2845792 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:46:33.215065 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:46:33.234800 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:46:33.254649 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:46:33.274323 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:46:33.295607 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1121 14:46:33.317422 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:46:33.339400 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:46:33.363593 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:46:33.381006 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785.pem --> /usr/share/ca-certificates/2635785.pem (1338 bytes)
	I1121 14:46:33.398365 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /usr/share/ca-certificates/26357852.pem (1708 bytes)
	I1121 14:46:33.418401 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:46:33.436339 2845792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:46:33.449672 2845792 ssh_runner.go:195] Run: openssl version
	I1121 14:46:33.456388 2845792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:46:33.464752 2845792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:46:33.468870 2845792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:46:33.468951 2845792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:46:33.510910 2845792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:46:33.519392 2845792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2635785.pem && ln -fs /usr/share/ca-certificates/2635785.pem /etc/ssl/certs/2635785.pem"
	I1121 14:46:33.527736 2845792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2635785.pem
	I1121 14:46:33.532080 2845792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/2635785.pem
	I1121 14:46:33.532151 2845792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2635785.pem
	I1121 14:46:33.574734 2845792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2635785.pem /etc/ssl/certs/51391683.0"
	I1121 14:46:33.583500 2845792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26357852.pem && ln -fs /usr/share/ca-certificates/26357852.pem /etc/ssl/certs/26357852.pem"
	I1121 14:46:33.592193 2845792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26357852.pem
	I1121 14:46:33.596505 2845792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/26357852.pem
	I1121 14:46:33.596573 2845792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26357852.pem
	I1121 14:46:33.641310 2845792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/26357852.pem /etc/ssl/certs/3ec20f2e.0"
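The hex file names being linked above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, the lookup key OpenSSL's CApath machinery uses to locate a CA at verification time. Reproduced by hand for one of them:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	# any client using the default CApath can now verify against minikubeCA:
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem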
	I1121 14:46:33.650758 2845792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:46:33.655183 2845792 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:46:33.655285 2845792 kubeadm.go:401] StartCluster: {Name:embed-certs-695324 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-695324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:46:33.655391 2845792 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:46:33.655480 2845792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:46:33.697520 2845792 cri.go:89] found id: ""
	I1121 14:46:33.697616 2845792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:46:33.710358 2845792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:46:33.727002 2845792 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:46:33.727078 2845792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:46:33.739743 2845792 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:46:33.739769 2845792 kubeadm.go:158] found existing configuration files:
	
	I1121 14:46:33.739844 2845792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:46:33.749321 2845792 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:46:33.749401 2845792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:46:33.758298 2845792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:46:33.767612 2845792 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:46:33.767696 2845792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:46:33.777791 2845792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:46:33.790423 2845792 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:46:33.790505 2845792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:46:33.800394 2845792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:46:33.811598 2845792 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:46:33.811676 2845792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
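The four grep-then-rm exchanges above reduce to one rule: a leftover kubeconfig survives only if it already points at the expected control-plane endpoint; everything else is removed so kubeadm init regenerates it fresh. As a loop:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" 2>/dev/null \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done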
	I1121 14:46:33.821600 2845792 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:46:33.904758 2845792 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:46:33.905157 2845792 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:46:33.995601 2845792 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:46:33.995705 2845792 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1121 14:46:33.995746 2845792 kubeadm.go:319] OS: Linux
	I1121 14:46:33.995806 2845792 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:46:33.995859 2845792 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1121 14:46:33.995914 2845792 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:46:33.995967 2845792 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:46:33.996021 2845792 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:46:33.996078 2845792 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:46:33.996131 2845792 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:46:33.996185 2845792 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:46:33.996243 2845792 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1121 14:46:34.167135 2845792 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:46:34.167251 2845792 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:46:34.167355 2845792 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:46:34.192295 2845792 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:46:33.117130 2843875 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (3.927879936s)
	I1121 14:46:33.117159 2843875 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1121 14:46:33.117184 2843875 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:46:33.117246 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:46:33.634091 2843875 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1121 14:46:33.634130 2843875 cache_images.go:125] Successfully loaded all cached images
	I1121 14:46:33.634136 2843875 cache_images.go:94] duration metric: took 14.742456869s to LoadCachedImages
	I1121 14:46:33.634147 2843875 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1121 14:46:33.634240 2843875 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-208006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-208006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:46:33.634311 2843875 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:46:33.669782 2843875 cni.go:84] Creating CNI manager for ""
	I1121 14:46:33.669803 2843875 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:46:33.669821 2843875 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:46:33.669844 2843875 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-208006 NodeName:no-preload-208006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:46:33.669954 2843875 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-208006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:46:33.670020 2843875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:46:33.679009 2843875 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1121 14:46:33.679125 2843875 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1121 14:46:33.687573 2843875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1121 14:46:33.687666 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1121 14:46:33.688405 2843875 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1121 14:46:33.688848 2843875 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1121 14:46:33.693447 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1121 14:46:33.693533 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
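Each download above pairs the binary with its published digest (the checksum=file:... suffix in the URL). The same verification by hand; for dl.k8s.io the .sha256 file contains only the bare digest, hence the two-space filename suffix sha256sum expects:

	curl -fsSLo kubelet https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet
	echo "$(curl -fsSL https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256)  kubelet" \
	  | sha256sum --check -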
	I1121 14:46:34.197160 2845792 out.go:252]   - Generating certificates and keys ...
	I1121 14:46:34.197309 2845792 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:46:34.197386 2845792 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:46:34.656022 2845792 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:46:34.758350 2845792 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:46:36.265068 2845792 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:46:34.669761 2843875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:46:34.686097 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1121 14:46:34.704593 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1121 14:46:34.704645 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1121 14:46:35.105312 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1121 14:46:35.119409 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1121 14:46:35.119462 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1121 14:46:35.550981 2843875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:46:35.560747 2843875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1121 14:46:35.575126 2843875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:46:35.589278 2843875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1121 14:46:35.603080 2843875 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:46:35.606888 2843875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:46:35.617122 2843875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:46:35.767604 2843875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:46:35.798388 2843875 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006 for IP: 192.168.85.2
	I1121 14:46:35.798462 2843875 certs.go:195] generating shared ca certs ...
	I1121 14:46:35.798494 2843875 certs.go:227] acquiring lock for ca certs: {Name:mk0a1b8efa9f1d453751b4f7afafeea16d7243a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:35.798691 2843875 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.key
	I1121 14:46:35.798761 2843875 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.key
	I1121 14:46:35.798807 2843875 certs.go:257] generating profile certs ...
	I1121 14:46:35.798888 2843875 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.key
	I1121 14:46:35.798926 2843875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt with IP's: []
	I1121 14:46:36.449366 2843875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt ...
	I1121 14:46:36.449399 2843875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: {Name:mk063bf35af73b12fd837097b9d2c88810446514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:36.449620 2843875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.key ...
	I1121 14:46:36.449635 2843875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.key: {Name:mkd5f39db09014633d4ad726504e48cbdcf85b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:36.449745 2843875 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.key.78bb1819
	I1121 14:46:36.449765 2843875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.crt.78bb1819 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1121 14:46:37.246284 2843875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.crt.78bb1819 ...
	I1121 14:46:37.246318 2843875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.crt.78bb1819: {Name:mkcf688db387cf76c0d5ba22b7c31e12385c4418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:37.246491 2843875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.key.78bb1819 ...
	I1121 14:46:37.246511 2843875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.key.78bb1819: {Name:mkd2f362bf164b144bc910285230d554b2e7ebd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:37.246590 2843875 certs.go:382] copying /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.crt.78bb1819 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.crt
	I1121 14:46:37.246676 2843875 certs.go:386] copying /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.key.78bb1819 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.key
	I1121 14:46:37.246739 2843875 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.key
	I1121 14:46:37.246757 2843875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.crt with IP's: []
	I1121 14:46:37.991881 2843875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.crt ...
	I1121 14:46:37.991913 2843875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.crt: {Name:mk751baab9333e8284a6eb2fdb2f2f3b200da788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:37.992854 2843875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.key ...
	I1121 14:46:37.992880 2843875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.key: {Name:mkb17b741c691928ceb9aa55ee605f0c11a03e8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
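Editor's note: the "generating signed profile cert ... with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]" lines above correspond to issuing an apiserver serving certificate whose IP SANs cover the service VIP, loopback, and the node address. A self-contained crypto/x509 sketch of that issuance, a hypothetical stand-in for minikube's crypto.go (only the SAN list is taken from the log):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"log"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Self-signed CA standing in for minikubeCA (key errors elided for brevity).
		caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		ca, _ := x509.ParseCertificate(caDER)

		// Serving cert with the IP SANs from the log line above.
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("issued %d-byte serving cert\n", len(der))
	}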
	I1121 14:46:37.993129 2843875 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785.pem (1338 bytes)
	W1121 14:46:37.993174 2843875 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785_empty.pem, impossibly tiny 0 bytes
	I1121 14:46:37.993188 2843875 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:46:37.993213 2843875 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem (1082 bytes)
	I1121 14:46:37.993240 2843875 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:46:37.993267 2843875 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem (1679 bytes)
	I1121 14:46:37.993313 2843875 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:46:37.993989 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:46:38.017665 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:46:38.042525 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:46:38.064315 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:46:38.086528 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:46:38.106865 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:46:38.126635 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:46:38.148414 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:46:38.166333 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /usr/share/ca-certificates/26357852.pem (1708 bytes)
	I1121 14:46:38.184300 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:46:38.201232 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785.pem --> /usr/share/ca-certificates/2635785.pem (1338 bytes)
	I1121 14:46:38.221109 2843875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:46:38.233395 2843875 ssh_runner.go:195] Run: openssl version
	I1121 14:46:38.240061 2843875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26357852.pem && ln -fs /usr/share/ca-certificates/26357852.pem /etc/ssl/certs/26357852.pem"
	I1121 14:46:38.248653 2843875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26357852.pem
	I1121 14:46:38.252805 2843875 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/26357852.pem
	I1121 14:46:38.252871 2843875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26357852.pem
	I1121 14:46:38.294173 2843875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/26357852.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:46:38.302471 2843875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:46:38.310378 2843875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:46:38.314794 2843875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:46:38.314858 2843875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:46:38.359280 2843875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:46:38.367577 2843875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2635785.pem && ln -fs /usr/share/ca-certificates/2635785.pem /etc/ssl/certs/2635785.pem"
	I1121 14:46:38.375534 2843875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2635785.pem
	I1121 14:46:38.379603 2843875 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/2635785.pem
	I1121 14:46:38.379668 2843875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2635785.pem
	I1121 14:46:38.421240 2843875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2635785.pem /etc/ssl/certs/51391683.0"
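Editor's note: the `openssl x509 -hash` / `ln -fs` pairs above install each CA under /etc/ssl/certs by OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL-linked clients locate a trust anchor. A sketch of the same flow from Go, shelling out to openssl as the runner does; a hypothetical helper, not minikube code:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash asks openssl for the cert's subject hash, then links
	// /etc/ssl/certs/<hash>.0 at the PEM so trust lookups can resolve it.
	func linkBySubjectHash(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ln -fs equivalent: replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked")
	}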
	I1121 14:46:38.429436 2843875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:46:38.433758 2843875 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:46:38.433820 2843875 kubeadm.go:401] StartCluster: {Name:no-preload-208006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-208006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:46:38.433903 2843875 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:46:38.433969 2843875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:46:38.475790 2843875 cri.go:89] found id: ""
	I1121 14:46:38.475869 2843875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:46:38.488361 2843875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:46:38.496890 2843875 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:46:38.496998 2843875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:46:38.507982 2843875 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:46:38.508053 2843875 kubeadm.go:158] found existing configuration files:
	
	I1121 14:46:38.508138 2843875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:46:38.517535 2843875 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:46:38.517645 2843875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:46:38.525673 2843875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:46:38.534510 2843875 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:46:38.534619 2843875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:46:38.542849 2843875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:46:38.552074 2843875 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:46:38.552210 2843875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:46:38.560433 2843875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:46:38.569912 2843875 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:46:38.570027 2843875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:46:38.578389 2843875 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:46:38.624113 2843875 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:46:38.624518 2843875 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:46:38.682239 2843875 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:46:38.682407 2843875 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1121 14:46:38.682466 2843875 kubeadm.go:319] OS: Linux
	I1121 14:46:38.682519 2843875 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:46:38.682573 2843875 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1121 14:46:38.682630 2843875 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:46:38.682684 2843875 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:46:38.682738 2843875 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:46:38.682792 2843875 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:46:38.682842 2843875 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:46:38.682896 2843875 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:46:38.682948 2843875 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1121 14:46:38.835113 2843875 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:46:38.835332 2843875 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:46:38.835465 2843875 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:46:38.841455 2843875 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:46:38.844479 2843875 out.go:252]   - Generating certificates and keys ...
	I1121 14:46:38.844648 2843875 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:46:38.844773 2843875 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:46:38.986880 2843875 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:46:36.801550 2845792 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:46:37.300113 2845792 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:46:37.300725 2845792 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-695324 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 14:46:37.573380 2845792 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:46:37.573534 2845792 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-695324 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 14:46:38.012202 2845792 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:46:38.354114 2845792 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:46:39.663730 2845792 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:46:39.664292 2845792 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:46:39.843739 2845792 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:46:40.412778 2845792 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:46:40.974962 2845792 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:46:42.413412 2845792 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:46:43.329405 2845792 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:46:43.329511 2845792 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:46:43.329587 2845792 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:46:39.646289 2843875 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:46:40.177826 2843875 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:46:40.404903 2843875 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:46:40.485842 2843875 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:46:40.486344 2843875 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-208006] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:46:40.970120 2843875 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:46:40.970651 2843875 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-208006] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:46:41.920062 2843875 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:46:41.996475 2843875 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:46:42.331149 2843875 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:46:42.331740 2843875 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:46:43.048833 2843875 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:46:43.876713 2843875 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:46:44.366177 2843875 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:46:44.731453 2843875 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:46:45.041964 2843875 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:46:45.043256 2843875 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:46:45.065472 2843875 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:46:43.334296 2845792 out.go:252]   - Booting up control plane ...
	I1121 14:46:43.334418 2845792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:46:43.334506 2845792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:46:43.334582 2845792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:46:43.348757 2845792 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:46:43.348869 2845792 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:46:43.358443 2845792 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:46:43.359996 2845792 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:46:43.363852 2845792 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:46:43.541412 2845792 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:46:43.541542 2845792 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:46:44.545397 2845792 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00163784s
	I1121 14:46:44.546513 2845792 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:46:44.546870 2845792 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1121 14:46:44.547189 2845792 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:46:44.547988 2845792 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:46:45.085426 2843875 out.go:252]   - Booting up control plane ...
	I1121 14:46:45.085562 2843875 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:46:45.085647 2843875 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:46:45.085721 2843875 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:46:45.116243 2843875 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:46:45.116371 2843875 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:46:45.129238 2843875 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:46:45.129667 2843875 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:46:45.129946 2843875 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:46:45.405428 2843875 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:46:45.405634 2843875 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:46:46.406009 2843875 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00129206s
	I1121 14:46:46.409857 2843875 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:46:46.409961 2843875 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1121 14:46:46.410210 2843875 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:46:46.410303 2843875 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:46:50.703193 2845792 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.154782345s
	I1121 14:46:51.315615 2843875 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.90296799s
	I1121 14:46:53.873519 2845792 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.324853511s
	I1121 14:46:56.051480 2845792 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.501883316s
	I1121 14:46:56.073087 2845792 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:46:56.099118 2845792 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:46:56.132494 2845792 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:46:56.133032 2845792 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-695324 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:46:56.159005 2845792 kubeadm.go:319] [bootstrap-token] Using token: a7ezg3.gdvjif9wl2df503w
	I1121 14:46:56.162048 2845792 out.go:252]   - Configuring RBAC rules ...
	I1121 14:46:56.162174 2845792 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:46:56.169431 2845792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:46:56.180210 2845792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:46:56.185881 2845792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:46:56.192881 2845792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:46:56.198337 2845792 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:46:55.162116 2843875 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.748659951s
	I1121 14:46:56.411632 2843875 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.001392218s
	I1121 14:46:56.432704 2843875 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:46:56.449085 2843875 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:46:56.477134 2843875 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:46:56.477354 2843875 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-208006 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:46:56.500677 2843875 kubeadm.go:319] [bootstrap-token] Using token: 2hh7sh.k2pmbohz9s00r858
	I1121 14:46:56.458698 2845792 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:46:56.974991 2845792 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:46:57.461652 2845792 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:46:57.469747 2845792 kubeadm.go:319] 
	I1121 14:46:57.469827 2845792 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:46:57.469834 2845792 kubeadm.go:319] 
	I1121 14:46:57.469914 2845792 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:46:57.469919 2845792 kubeadm.go:319] 
	I1121 14:46:57.469946 2845792 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:46:57.470017 2845792 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:46:57.470071 2845792 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:46:57.470075 2845792 kubeadm.go:319] 
	I1121 14:46:57.470131 2845792 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:46:57.470135 2845792 kubeadm.go:319] 
	I1121 14:46:57.470185 2845792 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:46:57.470190 2845792 kubeadm.go:319] 
	I1121 14:46:57.470244 2845792 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:46:57.470322 2845792 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:46:57.470393 2845792 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:46:57.470398 2845792 kubeadm.go:319] 
	I1121 14:46:57.470486 2845792 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:46:57.470566 2845792 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:46:57.470570 2845792 kubeadm.go:319] 
	I1121 14:46:57.470658 2845792 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token a7ezg3.gdvjif9wl2df503w \
	I1121 14:46:57.470765 2845792 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d756a1c258e082bbc06f965046f24233900a8e069c2a9d29a764f0b68af739ae \
	I1121 14:46:57.470786 2845792 kubeadm.go:319] 	--control-plane 
	I1121 14:46:57.470796 2845792 kubeadm.go:319] 
	I1121 14:46:57.470885 2845792 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:46:57.470889 2845792 kubeadm.go:319] 
	I1121 14:46:57.470974 2845792 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token a7ezg3.gdvjif9wl2df503w \
	I1121 14:46:57.471081 2845792 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d756a1c258e082bbc06f965046f24233900a8e069c2a9d29a764f0b68af739ae 
	I1121 14:46:57.486002 2845792 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1121 14:46:57.486322 2845792 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 14:46:57.486474 2845792 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:46:57.486513 2845792 cni.go:84] Creating CNI manager for ""
	I1121 14:46:57.486538 2845792 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:46:57.489717 2845792 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:46:56.503726 2843875 out.go:252]   - Configuring RBAC rules ...
	I1121 14:46:56.503862 2843875 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:46:56.514681 2843875 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:46:56.533128 2843875 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:46:56.539940 2843875 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:46:56.544263 2843875 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:46:56.549341 2843875 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:46:56.820115 2843875 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:46:57.250795 2843875 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:46:57.821421 2843875 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:46:57.822927 2843875 kubeadm.go:319] 
	I1121 14:46:57.823003 2843875 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:46:57.823009 2843875 kubeadm.go:319] 
	I1121 14:46:57.823090 2843875 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:46:57.823095 2843875 kubeadm.go:319] 
	I1121 14:46:57.823121 2843875 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:46:57.823183 2843875 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:46:57.823236 2843875 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:46:57.823240 2843875 kubeadm.go:319] 
	I1121 14:46:57.823296 2843875 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:46:57.823300 2843875 kubeadm.go:319] 
	I1121 14:46:57.823349 2843875 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:46:57.823354 2843875 kubeadm.go:319] 
	I1121 14:46:57.823408 2843875 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:46:57.823487 2843875 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:46:57.823558 2843875 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:46:57.823563 2843875 kubeadm.go:319] 
	I1121 14:46:57.823650 2843875 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:46:57.823730 2843875 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:46:57.823735 2843875 kubeadm.go:319] 
	I1121 14:46:57.823822 2843875 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2hh7sh.k2pmbohz9s00r858 \
	I1121 14:46:57.823930 2843875 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d756a1c258e082bbc06f965046f24233900a8e069c2a9d29a764f0b68af739ae \
	I1121 14:46:57.823952 2843875 kubeadm.go:319] 	--control-plane 
	I1121 14:46:57.823956 2843875 kubeadm.go:319] 
	I1121 14:46:57.824044 2843875 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:46:57.824049 2843875 kubeadm.go:319] 
	I1121 14:46:57.824134 2843875 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2hh7sh.k2pmbohz9s00r858 \
	I1121 14:46:57.824241 2843875 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d756a1c258e082bbc06f965046f24233900a8e069c2a9d29a764f0b68af739ae 
	I1121 14:46:57.830307 2843875 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1121 14:46:57.830715 2843875 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 14:46:57.830867 2843875 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
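Editor's note: both clusters print the same `--discovery-token-ca-cert-hash sha256:d756a1...` in their join commands because they share the minikubeCA generated earlier in this run. kubeadm derives that value as the SHA-256 of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A short sketch of the derivation, assuming the /var/lib/minikube/certs/ca.crt path copied above:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Hash the raw DER SubjectPublicKeyInfo, exactly the field kubeadm
		// pins for bootstrap-token discovery.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum) // should match the hash in the join commands above
	}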
	I1121 14:46:57.830890 2843875 cni.go:84] Creating CNI manager for ""
	I1121 14:46:57.830898 2843875 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:46:57.836892 2843875 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:46:57.839919 2843875 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:46:57.862803 2843875 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:46:57.862823 2843875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:46:57.956818 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:46:58.717141 2843875 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:46:58.717233 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:46:58.717289 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-208006 minikube.k8s.io/updated_at=2025_11_21T14_46_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=no-preload-208006 minikube.k8s.io/primary=true
	I1121 14:46:58.947531 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:46:58.947600 2843875 ops.go:34] apiserver oom_adj: -16
	I1121 14:46:59.448023 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:46:57.492834 2845792 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:46:57.513679 2845792 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:46:57.513700 2845792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:46:57.594379 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:46:58.110101 2845792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:46:58.110236 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:46:58.110301 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-695324 minikube.k8s.io/updated_at=2025_11_21T14_46_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=embed-certs-695324 minikube.k8s.io/primary=true
	I1121 14:46:58.497272 2845792 ops.go:34] apiserver oom_adj: -16
	I1121 14:46:58.497375 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:46:58.997498 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:46:59.497758 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:46:59.998242 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:00.497973 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:00.997486 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:46:59.947630 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:00.447609 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:00.947719 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:01.447714 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:01.947630 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:02.448441 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:02.672711 2843875 kubeadm.go:1114] duration metric: took 3.955529478s to wait for elevateKubeSystemPrivileges
	I1121 14:47:02.672740 2843875 kubeadm.go:403] duration metric: took 24.238924391s to StartCluster
	I1121 14:47:02.672757 2843875 settings.go:142] acquiring lock: {Name:mkd6064915932eca5a3b1d70feb4ec8240f340da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:47:02.672827 2843875 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:47:02.673605 2843875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/kubeconfig: {Name:mkd905aaf74d26e32c0b3e46a7edfbf13f4b98ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:47:02.673864 2843875 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:47:02.674015 2843875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:47:02.674289 2843875 config.go:182] Loaded profile config "no-preload-208006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:47:02.674274 2843875 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:47:02.674358 2843875 addons.go:70] Setting storage-provisioner=true in profile "no-preload-208006"
	I1121 14:47:02.674374 2843875 addons.go:239] Setting addon storage-provisioner=true in "no-preload-208006"
	I1121 14:47:02.674402 2843875 host.go:66] Checking if "no-preload-208006" exists ...
	I1121 14:47:02.674902 2843875 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:02.675141 2843875 addons.go:70] Setting default-storageclass=true in profile "no-preload-208006"
	I1121 14:47:02.675163 2843875 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-208006"
	I1121 14:47:02.675419 2843875 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:02.677171 2843875 out.go:179] * Verifying Kubernetes components...
	I1121 14:47:02.680137 2843875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:47:02.730022 2843875 addons.go:239] Setting addon default-storageclass=true in "no-preload-208006"
	I1121 14:47:02.730083 2843875 host.go:66] Checking if "no-preload-208006" exists ...
	I1121 14:47:02.730572 2843875 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:02.735133 2843875 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:47:01.498365 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:01.998241 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:02.498185 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:02.998092 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:03.413810 2845792 kubeadm.go:1114] duration metric: took 5.303623466s to wait for elevateKubeSystemPrivileges
	I1121 14:47:03.413860 2845792 kubeadm.go:403] duration metric: took 29.758567682s to StartCluster
	I1121 14:47:03.413879 2845792 settings.go:142] acquiring lock: {Name:mkd6064915932eca5a3b1d70feb4ec8240f340da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:47:03.413962 2845792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:47:03.415375 2845792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/kubeconfig: {Name:mkd905aaf74d26e32c0b3e46a7edfbf13f4b98ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:47:03.415643 2845792 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:47:03.415854 2845792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:47:03.416140 2845792 config.go:182] Loaded profile config "embed-certs-695324": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:47:03.416182 2845792 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:47:03.416255 2845792 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-695324"
	I1121 14:47:03.416272 2845792 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-695324"
	I1121 14:47:03.416292 2845792 host.go:66] Checking if "embed-certs-695324" exists ...
	I1121 14:47:03.417160 2845792 cli_runner.go:164] Run: docker container inspect embed-certs-695324 --format={{.State.Status}}
	I1121 14:47:03.417339 2845792 addons.go:70] Setting default-storageclass=true in profile "embed-certs-695324"
	I1121 14:47:03.417357 2845792 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-695324"
	I1121 14:47:03.417650 2845792 cli_runner.go:164] Run: docker container inspect embed-certs-695324 --format={{.State.Status}}
	I1121 14:47:03.421273 2845792 out.go:179] * Verifying Kubernetes components...
	I1121 14:47:03.429557 2845792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:47:03.458436 2845792 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:47:02.738100 2843875 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:47:02.738123 2843875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:47:02.738190 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:02.775748 2843875 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:47:02.775786 2843875 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:47:02.775868 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:02.780929 2843875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36730 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:02.804810 2843875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36730 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:03.341268 2843875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:47:03.535040 2843875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:47:03.571630 2843875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:47:03.571761 2843875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:47:05.389096 2843875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.854021552s)
	I1121 14:47:05.389275 2843875 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.817499753s)
	I1121 14:47:05.396345 2843875 node_ready.go:35] waiting up to 6m0s for node "no-preload-208006" to be "Ready" ...
	I1121 14:47:05.389291 2843875 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.817636955s)
	I1121 14:47:05.396590 2843875 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1121 14:47:05.400761 2843875 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1121 14:47:03.463027 2845792 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:47:03.463049 2845792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:47:03.463115 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:47:03.470965 2845792 addons.go:239] Setting addon default-storageclass=true in "embed-certs-695324"
	I1121 14:47:03.471020 2845792 host.go:66] Checking if "embed-certs-695324" exists ...
	I1121 14:47:03.471503 2845792 cli_runner.go:164] Run: docker container inspect embed-certs-695324 --format={{.State.Status}}
	I1121 14:47:03.505145 2845792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36735 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa Username:docker}
	I1121 14:47:03.512701 2845792 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:47:03.512724 2845792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:47:03.512788 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:47:03.539987 2845792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36735 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa Username:docker}
	I1121 14:47:04.239229 2845792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:47:04.331791 2845792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:47:04.332015 2845792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:47:04.676238 2845792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:47:05.928994 2845792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.689677774s)
	I1121 14:47:05.929075 2845792 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.597016543s)
	I1121 14:47:05.930142 2845792 node_ready.go:35] waiting up to 6m0s for node "embed-certs-695324" to be "Ready" ...
	I1121 14:47:05.930450 2845792 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.59855052s)
	I1121 14:47:05.930479 2845792 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1121 14:47:05.931701 2845792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.255385246s)
	I1121 14:47:05.973852 2845792 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:47:05.976766 2845792 addons.go:530] duration metric: took 2.560566265s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:47:05.403839 2843875 addons.go:530] duration metric: took 2.729549073s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1121 14:47:05.902198 2843875 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-208006" context rescaled to 1 replicas
	W1121 14:47:07.399560 2843875 node_ready.go:57] node "no-preload-208006" has "Ready":"False" status (will retry)
	I1121 14:47:06.435484 2845792 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-695324" context rescaled to 1 replicas
	W1121 14:47:07.933728 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:10.433881 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:09.899343 2843875 node_ready.go:57] node "no-preload-208006" has "Ready":"False" status (will retry)
	W1121 14:47:12.399222 2843875 node_ready.go:57] node "no-preload-208006" has "Ready":"False" status (will retry)
	W1121 14:47:12.434340 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:14.933915 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:14.899208 2843875 node_ready.go:57] node "no-preload-208006" has "Ready":"False" status (will retry)
	W1121 14:47:17.399141 2843875 node_ready.go:57] node "no-preload-208006" has "Ready":"False" status (will retry)
	I1121 14:47:17.899233 2843875 node_ready.go:49] node "no-preload-208006" is "Ready"
	I1121 14:47:17.899262 2843875 node_ready.go:38] duration metric: took 12.50288338s for node "no-preload-208006" to be "Ready" ...
	I1121 14:47:17.899276 2843875 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:47:17.899331 2843875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:47:17.919686 2843875 api_server.go:72] duration metric: took 15.24579215s to wait for apiserver process to appear ...
	I1121 14:47:17.919707 2843875 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:47:17.919728 2843875 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:47:17.927711 2843875 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 14:47:17.928802 2843875 api_server.go:141] control plane version: v1.34.1
	I1121 14:47:17.928833 2843875 api_server.go:131] duration metric: took 9.118631ms to wait for apiserver health ...
	I1121 14:47:17.928843 2843875 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:47:17.932252 2843875 system_pods.go:59] 8 kube-system pods found
	I1121 14:47:17.932284 2843875 system_pods.go:61] "coredns-66bc5c9577-685tb" [6803e67a-8098-45df-8806-57553f15a42b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:47:17.932290 2843875 system_pods.go:61] "etcd-no-preload-208006" [188d36d7-4f3c-4063-a524-ce832267b9a2] Running
	I1121 14:47:17.932297 2843875 system_pods.go:61] "kindnet-kcbj5" [60909558-fb73-4b08-a011-4d60cb8d5564] Running
	I1121 14:47:17.932302 2843875 system_pods.go:61] "kube-apiserver-no-preload-208006" [f3226245-c329-4a92-809f-db513eb9b685] Running
	I1121 14:47:17.932307 2843875 system_pods.go:61] "kube-controller-manager-no-preload-208006" [d70f56da-89fb-4e35-ad0a-5aaa6446b3da] Running
	I1121 14:47:17.932310 2843875 system_pods.go:61] "kube-proxy-9xgd7" [47e3faf6-8bcc-48ab-a34a-df6769a2ca1b] Running
	I1121 14:47:17.932314 2843875 system_pods.go:61] "kube-scheduler-no-preload-208006" [ea0ab1f0-7cf1-4599-b29c-473acbcfe4d0] Running
	I1121 14:47:17.932320 2843875 system_pods.go:61] "storage-provisioner" [8bb32e63-0669-499e-81f0-9e79f31c0762] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:47:17.932325 2843875 system_pods.go:74] duration metric: took 3.477176ms to wait for pod list to return data ...
	I1121 14:47:17.932333 2843875 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:47:17.935930 2843875 default_sa.go:45] found service account: "default"
	I1121 14:47:17.935955 2843875 default_sa.go:55] duration metric: took 3.616118ms for default service account to be created ...
	I1121 14:47:17.935965 2843875 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:47:17.939105 2843875 system_pods.go:86] 8 kube-system pods found
	I1121 14:47:17.939139 2843875 system_pods.go:89] "coredns-66bc5c9577-685tb" [6803e67a-8098-45df-8806-57553f15a42b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:47:17.939146 2843875 system_pods.go:89] "etcd-no-preload-208006" [188d36d7-4f3c-4063-a524-ce832267b9a2] Running
	I1121 14:47:17.939152 2843875 system_pods.go:89] "kindnet-kcbj5" [60909558-fb73-4b08-a011-4d60cb8d5564] Running
	I1121 14:47:17.939157 2843875 system_pods.go:89] "kube-apiserver-no-preload-208006" [f3226245-c329-4a92-809f-db513eb9b685] Running
	I1121 14:47:17.939166 2843875 system_pods.go:89] "kube-controller-manager-no-preload-208006" [d70f56da-89fb-4e35-ad0a-5aaa6446b3da] Running
	I1121 14:47:17.939171 2843875 system_pods.go:89] "kube-proxy-9xgd7" [47e3faf6-8bcc-48ab-a34a-df6769a2ca1b] Running
	I1121 14:47:17.939177 2843875 system_pods.go:89] "kube-scheduler-no-preload-208006" [ea0ab1f0-7cf1-4599-b29c-473acbcfe4d0] Running
	I1121 14:47:17.939183 2843875 system_pods.go:89] "storage-provisioner" [8bb32e63-0669-499e-81f0-9e79f31c0762] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:47:17.939214 2843875 retry.go:31] will retry after 197.430621ms: missing components: kube-dns
	I1121 14:47:18.144359 2843875 system_pods.go:86] 8 kube-system pods found
	I1121 14:47:18.144398 2843875 system_pods.go:89] "coredns-66bc5c9577-685tb" [6803e67a-8098-45df-8806-57553f15a42b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:47:18.144406 2843875 system_pods.go:89] "etcd-no-preload-208006" [188d36d7-4f3c-4063-a524-ce832267b9a2] Running
	I1121 14:47:18.144413 2843875 system_pods.go:89] "kindnet-kcbj5" [60909558-fb73-4b08-a011-4d60cb8d5564] Running
	I1121 14:47:18.144421 2843875 system_pods.go:89] "kube-apiserver-no-preload-208006" [f3226245-c329-4a92-809f-db513eb9b685] Running
	I1121 14:47:18.144427 2843875 system_pods.go:89] "kube-controller-manager-no-preload-208006" [d70f56da-89fb-4e35-ad0a-5aaa6446b3da] Running
	I1121 14:47:18.144430 2843875 system_pods.go:89] "kube-proxy-9xgd7" [47e3faf6-8bcc-48ab-a34a-df6769a2ca1b] Running
	I1121 14:47:18.144434 2843875 system_pods.go:89] "kube-scheduler-no-preload-208006" [ea0ab1f0-7cf1-4599-b29c-473acbcfe4d0] Running
	I1121 14:47:18.144444 2843875 system_pods.go:89] "storage-provisioner" [8bb32e63-0669-499e-81f0-9e79f31c0762] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:47:18.144459 2843875 retry.go:31] will retry after 339.966672ms: missing components: kube-dns
	I1121 14:47:18.489144 2843875 system_pods.go:86] 8 kube-system pods found
	I1121 14:47:18.489185 2843875 system_pods.go:89] "coredns-66bc5c9577-685tb" [6803e67a-8098-45df-8806-57553f15a42b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:47:18.489193 2843875 system_pods.go:89] "etcd-no-preload-208006" [188d36d7-4f3c-4063-a524-ce832267b9a2] Running
	I1121 14:47:18.489200 2843875 system_pods.go:89] "kindnet-kcbj5" [60909558-fb73-4b08-a011-4d60cb8d5564] Running
	I1121 14:47:18.489207 2843875 system_pods.go:89] "kube-apiserver-no-preload-208006" [f3226245-c329-4a92-809f-db513eb9b685] Running
	I1121 14:47:18.489220 2843875 system_pods.go:89] "kube-controller-manager-no-preload-208006" [d70f56da-89fb-4e35-ad0a-5aaa6446b3da] Running
	I1121 14:47:18.489229 2843875 system_pods.go:89] "kube-proxy-9xgd7" [47e3faf6-8bcc-48ab-a34a-df6769a2ca1b] Running
	I1121 14:47:18.489233 2843875 system_pods.go:89] "kube-scheduler-no-preload-208006" [ea0ab1f0-7cf1-4599-b29c-473acbcfe4d0] Running
	I1121 14:47:18.489244 2843875 system_pods.go:89] "storage-provisioner" [8bb32e63-0669-499e-81f0-9e79f31c0762] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:47:18.489267 2843875 retry.go:31] will retry after 358.331251ms: missing components: kube-dns
	I1121 14:47:18.851664 2843875 system_pods.go:86] 8 kube-system pods found
	I1121 14:47:18.851746 2843875 system_pods.go:89] "coredns-66bc5c9577-685tb" [6803e67a-8098-45df-8806-57553f15a42b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:47:18.851763 2843875 system_pods.go:89] "etcd-no-preload-208006" [188d36d7-4f3c-4063-a524-ce832267b9a2] Running
	I1121 14:47:18.851770 2843875 system_pods.go:89] "kindnet-kcbj5" [60909558-fb73-4b08-a011-4d60cb8d5564] Running
	I1121 14:47:18.851774 2843875 system_pods.go:89] "kube-apiserver-no-preload-208006" [f3226245-c329-4a92-809f-db513eb9b685] Running
	I1121 14:47:18.851779 2843875 system_pods.go:89] "kube-controller-manager-no-preload-208006" [d70f56da-89fb-4e35-ad0a-5aaa6446b3da] Running
	I1121 14:47:18.851783 2843875 system_pods.go:89] "kube-proxy-9xgd7" [47e3faf6-8bcc-48ab-a34a-df6769a2ca1b] Running
	I1121 14:47:18.851787 2843875 system_pods.go:89] "kube-scheduler-no-preload-208006" [ea0ab1f0-7cf1-4599-b29c-473acbcfe4d0] Running
	I1121 14:47:18.851792 2843875 system_pods.go:89] "storage-provisioner" [8bb32e63-0669-499e-81f0-9e79f31c0762] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:47:18.851810 2843875 system_pods.go:126] duration metric: took 915.838215ms to wait for k8s-apps to be running ...
	I1121 14:47:18.851824 2843875 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:47:18.851895 2843875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:47:18.869665 2843875 system_svc.go:56] duration metric: took 17.830478ms WaitForService to wait for kubelet
	I1121 14:47:18.869695 2843875 kubeadm.go:587] duration metric: took 16.195805917s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:47:18.869714 2843875 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:47:18.899947 2843875 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 14:47:18.899983 2843875 node_conditions.go:123] node cpu capacity is 2
	I1121 14:47:18.899998 2843875 node_conditions.go:105] duration metric: took 30.27319ms to run NodePressure ...
	I1121 14:47:18.900018 2843875 start.go:242] waiting for startup goroutines ...
	I1121 14:47:18.900035 2843875 start.go:247] waiting for cluster config update ...
	I1121 14:47:18.900047 2843875 start.go:256] writing updated cluster config ...
	I1121 14:47:18.900334 2843875 ssh_runner.go:195] Run: rm -f paused
	I1121 14:47:18.905576 2843875 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:47:18.925377 2843875 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-685tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:18.941379 2843875 pod_ready.go:94] pod "coredns-66bc5c9577-685tb" is "Ready"
	I1121 14:47:18.941411 2843875 pod_ready.go:86] duration metric: took 16.002584ms for pod "coredns-66bc5c9577-685tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:18.947135 2843875 pod_ready.go:83] waiting for pod "etcd-no-preload-208006" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:18.955621 2843875 pod_ready.go:94] pod "etcd-no-preload-208006" is "Ready"
	I1121 14:47:18.955649 2843875 pod_ready.go:86] duration metric: took 8.487092ms for pod "etcd-no-preload-208006" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:19.019716 2843875 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-208006" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:19.024921 2843875 pod_ready.go:94] pod "kube-apiserver-no-preload-208006" is "Ready"
	I1121 14:47:19.024950 2843875 pod_ready.go:86] duration metric: took 5.204379ms for pod "kube-apiserver-no-preload-208006" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:19.027589 2843875 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-208006" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:19.310481 2843875 pod_ready.go:94] pod "kube-controller-manager-no-preload-208006" is "Ready"
	I1121 14:47:19.310509 2843875 pod_ready.go:86] duration metric: took 282.895643ms for pod "kube-controller-manager-no-preload-208006" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:19.510554 2843875 pod_ready.go:83] waiting for pod "kube-proxy-9xgd7" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:19.909588 2843875 pod_ready.go:94] pod "kube-proxy-9xgd7" is "Ready"
	I1121 14:47:19.909617 2843875 pod_ready.go:86] duration metric: took 399.034288ms for pod "kube-proxy-9xgd7" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:20.110190 2843875 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-208006" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:20.509705 2843875 pod_ready.go:94] pod "kube-scheduler-no-preload-208006" is "Ready"
	I1121 14:47:20.509736 2843875 pod_ready.go:86] duration metric: took 399.520691ms for pod "kube-scheduler-no-preload-208006" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:20.509749 2843875 pod_ready.go:40] duration metric: took 1.604141714s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:47:20.570644 2843875 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 14:47:20.573889 2843875 out.go:179] * Done! kubectl is now configured to use "no-preload-208006" cluster and "default" namespace by default
	W1121 14:47:17.433813 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:19.933958 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:22.434325 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:24.932758 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:26.933974 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:29.432964 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
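
The interleaved "will retry" warnings above are minikube polling each node's Ready condition (PID 2843875 is the no-preload-208006 cluster, PID 2845792 is embed-certs-695324); no-preload-208006 flips to Ready at 14:47:17. A minimal sketch of the same check, assuming the test contexts are still present in the local kubeconfig:

	kubectl --context no-preload-208006 get node no-preload-208006 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'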
	
	
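For reference, the long sed pipeline logged at 14:47:03-14:47:05 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway. Reconstructed from the sed expression itself, the stanza it inserts ahead of the forward directive is:

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}

(The same pipeline also inserts a log directive before errors; the embed-certs cluster gets 192.168.76.1 instead of 192.168.85.1.)
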
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	26337959306e0       1611cd07b61d5       8 seconds ago       Running             busybox                   0                   0bc74ef149db8       busybox                                     default
	9a1d80f65a499       138784d87c9c5       13 seconds ago      Running             coredns                   0                   82f350a64c951       coredns-66bc5c9577-685tb                    kube-system
	b011fda5b154d       66749159455b3       13 seconds ago      Running             storage-provisioner       0                   0de8a2721d3b6       storage-provisioner                         kube-system
	f3dd66e01305a       b1a8c6f707935       24 seconds ago      Running             kindnet-cni               0                   5cdfbfdd8455a       kindnet-kcbj5                               kube-system
	7e57e7c8851a9       05baa95f5142d       27 seconds ago      Running             kube-proxy                0                   0272c44eda3b8       kube-proxy-9xgd7                            kube-system
	05bfdef30141a       b5f57ec6b9867       44 seconds ago      Running             kube-scheduler            0                   253d1c4b4cc62       kube-scheduler-no-preload-208006            kube-system
	e51ffcbc830b0       7eb2c6ff0c5a7       44 seconds ago      Running             kube-controller-manager   0                   4139c67d6c6dd       kube-controller-manager-no-preload-208006   kube-system
	670da2ec0c5a2       43911e833d64d       45 seconds ago      Running             kube-apiserver            0                   a533469dde9ca       kube-apiserver-no-preload-208006            kube-system
	8f30bcc0ffef6       a1894772a478e       45 seconds ago      Running             etcd                      0                   ace5ff2dd9929       etcd-no-preload-208006                      kube-system
	
	
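The table above is the CRI container view collected from inside the node. A sketch of how to reproduce it while the node container is still running, assuming crictl is available on the node's PATH:

	docker exec no-preload-208006 crictl ps -a
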
	==> containerd <==
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.197559859Z" level=info msg="CreateContainer within sandbox \"82f350a64c9516ca00bcab3002194bfe71c1b684cb5d2f4ad5470174cb5e3bd8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.199372795Z" level=info msg="StartContainer for \"b011fda5b154db9e2a0ab88b094fd30c123607858f7144eb0187d89bc6c74ac9\""
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.200465014Z" level=info msg="connecting to shim b011fda5b154db9e2a0ab88b094fd30c123607858f7144eb0187d89bc6c74ac9" address="unix:///run/containerd/s/09f659852cedc7fa9b63e78a0785442734cb68d6f5faad8529006b1f2ee0c3b5" protocol=ttrpc version=3
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.213625498Z" level=info msg="Container 9a1d80f65a499b5e66dbd87f28482a42a0845efa7b882a7e675d26b780ed885b: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.245218362Z" level=info msg="CreateContainer within sandbox \"82f350a64c9516ca00bcab3002194bfe71c1b684cb5d2f4ad5470174cb5e3bd8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a1d80f65a499b5e66dbd87f28482a42a0845efa7b882a7e675d26b780ed885b\""
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.248711743Z" level=info msg="StartContainer for \"9a1d80f65a499b5e66dbd87f28482a42a0845efa7b882a7e675d26b780ed885b\""
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.253780873Z" level=info msg="connecting to shim 9a1d80f65a499b5e66dbd87f28482a42a0845efa7b882a7e675d26b780ed885b" address="unix:///run/containerd/s/a26a29e70ca1d73b1909ba5807d7fc1f4c04ee12306bf6c5dae092582b93c1db" protocol=ttrpc version=3
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.280939363Z" level=info msg="StartContainer for \"b011fda5b154db9e2a0ab88b094fd30c123607858f7144eb0187d89bc6c74ac9\" returns successfully"
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.353652507Z" level=info msg="StartContainer for \"9a1d80f65a499b5e66dbd87f28482a42a0845efa7b882a7e675d26b780ed885b\" returns successfully"
	Nov 21 14:47:21 no-preload-208006 containerd[760]: time="2025-11-21T14:47:21.109017599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:0ea2f04f-bf79-4916-b7c0-6bb05a5c87d0,Namespace:default,Attempt:0,}"
	Nov 21 14:47:21 no-preload-208006 containerd[760]: time="2025-11-21T14:47:21.173965337Z" level=info msg="connecting to shim 0bc74ef149db867a422be0c95f9362b93b20c2265c9e527905068e0c3de37e4e" address="unix:///run/containerd/s/790cf03eccdd35758c66551a1d2548db57ab7665489dfecec5725dcd83c847e3" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:47:21 no-preload-208006 containerd[760]: time="2025-11-21T14:47:21.233464780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:0ea2f04f-bf79-4916-b7c0-6bb05a5c87d0,Namespace:default,Attempt:0,} returns sandbox id \"0bc74ef149db867a422be0c95f9362b93b20c2265c9e527905068e0c3de37e4e\""
	Nov 21 14:47:21 no-preload-208006 containerd[760]: time="2025-11-21T14:47:21.238759061Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.372986515Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.374885103Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937186"
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.377148307Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.380164828Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.380917044Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.1419312s"
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.380960760Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.388873533Z" level=info msg="CreateContainer within sandbox \"0bc74ef149db867a422be0c95f9362b93b20c2265c9e527905068e0c3de37e4e\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.405911354Z" level=info msg="Container 26337959306e0f5e739c563b1f6adef1693f017dfb0173ccd8f364de6dcb6412: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.418460015Z" level=info msg="CreateContainer within sandbox \"0bc74ef149db867a422be0c95f9362b93b20c2265c9e527905068e0c3de37e4e\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"26337959306e0f5e739c563b1f6adef1693f017dfb0173ccd8f364de6dcb6412\""
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.419058512Z" level=info msg="StartContainer for \"26337959306e0f5e739c563b1f6adef1693f017dfb0173ccd8f364de6dcb6412\""
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.419991991Z" level=info msg="connecting to shim 26337959306e0f5e739c563b1f6adef1693f017dfb0173ccd8f364de6dcb6412" address="unix:///run/containerd/s/790cf03eccdd35758c66551a1d2548db57ab7665489dfecec5725dcd83c847e3" protocol=ttrpc version=3
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.497472062Z" level=info msg="StartContainer for \"26337959306e0f5e739c563b1f6adef1693f017dfb0173ccd8f364de6dcb6412\" returns successfully"
	
	
	==> coredns [9a1d80f65a499b5e66dbd87f28482a42a0845efa7b882a7e675d26b780ed885b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46660 - 36110 "HINFO IN 6973837720917439497.4433172359432745824. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027940457s
	
	
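The HINFO query in the CoreDNS output is the loop plugin's startup self-probe; the NXDOMAIN answer indicates no forwarding loop was detected. The same log can be pulled live, assuming the context and pod still exist:

	kubectl --context no-preload-208006 -n kube-system logs coredns-66bc5c9577-685tb
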
	==> describe nodes <==
	Name:               no-preload-208006
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-208006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=no-preload-208006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_46_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:46:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-208006
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:47:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:47:28 +0000   Fri, 21 Nov 2025 14:46:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:47:28 +0000   Fri, 21 Nov 2025 14:46:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:47:28 +0000   Fri, 21 Nov 2025 14:46:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:47:28 +0000   Fri, 21 Nov 2025 14:47:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-208006
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                f039ed5e-2d51-4016-b933-b720b8535aa9
	  Boot ID:                    41b0e09d-5a9a-49c9-8980-dca608ba3fce
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-685tb                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-208006                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-kcbj5                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-208006             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-208006    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-9xgd7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-208006             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 26s   kube-proxy       
	  Normal   Starting                 35s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  35s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  34s   kubelet          Node no-preload-208006 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s   kubelet          Node no-preload-208006 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s   kubelet          Node no-preload-208006 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s   node-controller  Node no-preload-208006 event: Registered Node no-preload-208006 in Controller
	  Normal   NodeReady                15s   kubelet          Node no-preload-208006 status is now: NodeReady
	
	
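The node description above (note the Ready transition at 14:47:17, matching the node_ready polling in the main log) can be regenerated with:

	kubectl --context no-preload-208006 describe node no-preload-208006
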
	==> dmesg <==
	[Nov21 13:02] overlayfs: idmapped layers are currently not supported
	[Nov21 13:03] overlayfs: idmapped layers are currently not supported
	[Nov21 13:06] overlayfs: idmapped layers are currently not supported
	[Nov21 13:08] overlayfs: idmapped layers are currently not supported
	[Nov21 13:09] overlayfs: idmapped layers are currently not supported
	[Nov21 13:10] overlayfs: idmapped layers are currently not supported
	[ +19.808801] overlayfs: idmapped layers are currently not supported
	[Nov21 13:11] overlayfs: idmapped layers are currently not supported
	[Nov21 13:12] overlayfs: idmapped layers are currently not supported
	[Nov21 13:13] overlayfs: idmapped layers are currently not supported
	[Nov21 13:14] overlayfs: idmapped layers are currently not supported
	[Nov21 13:15] overlayfs: idmapped layers are currently not supported
	[ +16.772572] overlayfs: idmapped layers are currently not supported
	[Nov21 13:16] overlayfs: idmapped layers are currently not supported
	[Nov21 13:17] overlayfs: idmapped layers are currently not supported
	[ +27.396777] overlayfs: idmapped layers are currently not supported
	[Nov21 13:18] overlayfs: idmapped layers are currently not supported
	[ +25.430119] overlayfs: idmapped layers are currently not supported
	[Nov21 13:19] overlayfs: idmapped layers are currently not supported
	[Nov21 13:20] overlayfs: idmapped layers are currently not supported
	[Nov21 13:21] overlayfs: idmapped layers are currently not supported
	[Nov21 13:22] overlayfs: idmapped layers are currently not supported
	[Nov21 13:23] overlayfs: idmapped layers are currently not supported
	[Nov21 13:24] overlayfs: idmapped layers are currently not supported
	[Nov21 13:55] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [8f30bcc0ffef68f33676c531a54c185943fd5843eeb062e2a7a47fc41ccff421] <==
	{"level":"warn","ts":"2025-11-21T14:46:52.265097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.291431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.319419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.357227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.390835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.402731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.424772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.457429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.505766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.533825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.561751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.592001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.618663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.663395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.681496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.702665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.725845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.745310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.762325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.782925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.800021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.813638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.839382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:53.007710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60644","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T14:47:03.837350Z","caller":"traceutil/trace.go:172","msg":"trace[1449625866] transaction","detail":"{read_only:false; response_revision:360; number_of_response:1; }","duration":"106.117683ms","start":"2025-11-21T14:47:03.731216Z","end":"2025-11-21T14:47:03.837334Z","steps":["trace[1449625866] 'process raft request'  (duration: 96.618427ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:47:32 up 19:30,  0 user,  load average: 4.41, 3.50, 2.94
	Linux no-preload-208006 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f3dd66e01305aa67da4fef766c626727d676c7ffe74473a1010270d904b974d1] <==
	I1121 14:47:07.423627       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:47:07.424602       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:47:07.426107       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:47:07.426264       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:47:07.426369       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:47:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:47:07.629595       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:47:07.629775       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:47:07.629851       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:47:07.631056       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:47:07.930389       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:47:07.930501       1 metrics.go:72] Registering metrics
	I1121 14:47:07.930644       1 controller.go:711] "Syncing nftables rules"
	I1121 14:47:17.637136       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:47:17.637190       1 main.go:301] handling current node
	I1121 14:47:27.629095       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:47:27.629130       1 main.go:301] handling current node
	
	
	==> kube-apiserver [670da2ec0c5a22405cd819ddba5cacc0165673f1fa923b5507091c8767428c9e] <==
	I1121 14:46:54.476844       1 controller.go:667] quota admission added evaluator for: namespaces
	E1121 14:46:54.483547       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1121 14:46:54.537600       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:46:54.539719       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 14:46:54.548534       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:46:54.548815       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:46:54.703050       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:46:55.076566       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:46:55.093359       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:46:55.093583       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:46:56.227298       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:46:56.305785       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:46:56.386601       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:46:56.394664       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1121 14:46:56.395983       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:46:56.401589       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:46:57.221291       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:46:57.228300       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:46:57.244684       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:46:57.256479       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:47:03.211276       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:47:03.407397       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:47:03.414140       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:47:03.581659       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1121 14:47:30.955479       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:49786: use of closed network connection
	
	
	==> kube-controller-manager [e51ffcbc830b08843be90ae4a5cbc20e3b6d6721e6d01983023416c9a7ebff67] <==
	I1121 14:47:02.340990       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-208006"
	I1121 14:47:02.341063       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1121 14:47:02.341122       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 14:47:02.341167       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:47:02.353345       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:47:02.357439       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:47:02.361191       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:47:02.362665       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 14:47:02.362688       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:47:02.362791       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:47:02.362810       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 14:47:02.362828       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 14:47:02.363161       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-208006" podCIDRs=["10.244.0.0/24"]
	I1121 14:47:02.363466       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:47:02.363519       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:47:02.363532       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1121 14:47:02.363556       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 14:47:02.363588       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:47:02.363597       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 14:47:02.363609       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1121 14:47:02.363622       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1121 14:47:02.363654       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:47:02.375264       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 14:47:18.902229       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1121 14:47:22.343852       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7e57e7c8851a9cc8ab9aae48190e5273f29aca6479946be08dd8ce6aae53eae4] <==
	I1121 14:47:05.018304       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:47:05.134803       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:47:05.240791       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:47:05.240841       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:47:05.240934       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:47:05.568945       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:47:05.569237       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:47:05.677217       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:47:05.677568       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:47:05.677588       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:47:05.682439       1 config.go:200] "Starting service config controller"
	I1121 14:47:05.682500       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:47:05.683699       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:47:05.683716       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:47:05.683900       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:47:05.683911       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:47:05.728327       1 config.go:309] "Starting node config controller"
	I1121 14:47:05.728346       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:47:05.728381       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:47:05.783427       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:47:05.784589       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:47:05.784633       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [05bfdef30141a8e21622a5df2d0b5fad2030cdf0b24ad8c65c35f99be64b97da] <==
	I1121 14:46:55.117337       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:46:55.124337       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 14:46:55.130702       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:46:55.130992       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:46:55.131144       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1121 14:46:55.158936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:46:55.159314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:46:55.159506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:46:55.159738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:46:55.159925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:46:55.160395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:46:55.160605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:46:55.160797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:46:55.160977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:46:55.161170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:46:55.161339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:46:55.161498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:46:55.161762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:46:55.161964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:46:55.162234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:46:55.162392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:46:55.162542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:46:55.162641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:46:55.163429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1121 14:46:56.631723       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:46:58 no-preload-208006 kubelet[2080]: I1121 14:46:58.436313    2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-208006" podStartSLOduration=2.435938638 podStartE2EDuration="2.435938638s" podCreationTimestamp="2025-11-21 14:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:46:58.435761347 +0000 UTC m=+1.271249544" watchObservedRunningTime="2025-11-21 14:46:58.435938638 +0000 UTC m=+1.271426835"
	Nov 21 14:46:58 no-preload-208006 kubelet[2080]: I1121 14:46:58.505595    2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-208006" podStartSLOduration=0.50556772 podStartE2EDuration="505.56772ms" podCreationTimestamp="2025-11-21 14:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:46:58.476902164 +0000 UTC m=+1.312390352" watchObservedRunningTime="2025-11-21 14:46:58.50556772 +0000 UTC m=+1.341055917"
	Nov 21 14:46:58 no-preload-208006 kubelet[2080]: I1121 14:46:58.505706    2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-208006" podStartSLOduration=0.50570178 podStartE2EDuration="505.70178ms" podCreationTimestamp="2025-11-21 14:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:46:58.49872374 +0000 UTC m=+1.334211962" watchObservedRunningTime="2025-11-21 14:46:58.50570178 +0000 UTC m=+1.341189977"
	Nov 21 14:46:58 no-preload-208006 kubelet[2080]: I1121 14:46:58.575385    2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-208006" podStartSLOduration=0.575366503 podStartE2EDuration="575.366503ms" podCreationTimestamp="2025-11-21 14:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:46:58.53272479 +0000 UTC m=+1.368212979" watchObservedRunningTime="2025-11-21 14:46:58.575366503 +0000 UTC m=+1.410854700"
	Nov 21 14:47:02 no-preload-208006 kubelet[2080]: I1121 14:47:02.337473    2080 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:47:02 no-preload-208006 kubelet[2080]: I1121 14:47:02.339571    2080 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.591659    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60909558-fb73-4b08-a011-4d60cb8d5564-xtables-lock\") pod \"kindnet-kcbj5\" (UID: \"60909558-fb73-4b08-a011-4d60cb8d5564\") " pod="kube-system/kindnet-kcbj5"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.592279    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60909558-fb73-4b08-a011-4d60cb8d5564-lib-modules\") pod \"kindnet-kcbj5\" (UID: \"60909558-fb73-4b08-a011-4d60cb8d5564\") " pod="kube-system/kindnet-kcbj5"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.592700    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47e3faf6-8bcc-48ab-a34a-df6769a2ca1b-xtables-lock\") pod \"kube-proxy-9xgd7\" (UID: \"47e3faf6-8bcc-48ab-a34a-df6769a2ca1b\") " pod="kube-system/kube-proxy-9xgd7"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.593001    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mttf5\" (UniqueName: \"kubernetes.io/projected/47e3faf6-8bcc-48ab-a34a-df6769a2ca1b-kube-api-access-mttf5\") pod \"kube-proxy-9xgd7\" (UID: \"47e3faf6-8bcc-48ab-a34a-df6769a2ca1b\") " pod="kube-system/kube-proxy-9xgd7"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.597427    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47e3faf6-8bcc-48ab-a34a-df6769a2ca1b-lib-modules\") pod \"kube-proxy-9xgd7\" (UID: \"47e3faf6-8bcc-48ab-a34a-df6769a2ca1b\") " pod="kube-system/kube-proxy-9xgd7"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.597668    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/60909558-fb73-4b08-a011-4d60cb8d5564-cni-cfg\") pod \"kindnet-kcbj5\" (UID: \"60909558-fb73-4b08-a011-4d60cb8d5564\") " pod="kube-system/kindnet-kcbj5"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.597795    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs5vk\" (UniqueName: \"kubernetes.io/projected/60909558-fb73-4b08-a011-4d60cb8d5564-kube-api-access-cs5vk\") pod \"kindnet-kcbj5\" (UID: \"60909558-fb73-4b08-a011-4d60cb8d5564\") " pod="kube-system/kindnet-kcbj5"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.597928    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/47e3faf6-8bcc-48ab-a34a-df6769a2ca1b-kube-proxy\") pod \"kube-proxy-9xgd7\" (UID: \"47e3faf6-8bcc-48ab-a34a-df6769a2ca1b\") " pod="kube-system/kube-proxy-9xgd7"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.889961    2080 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 21 14:47:06 no-preload-208006 kubelet[2080]: I1121 14:47:06.044390    2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9xgd7" podStartSLOduration=3.044370019 podStartE2EDuration="3.044370019s" podCreationTimestamp="2025-11-21 14:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:47:05.838086139 +0000 UTC m=+8.673574344" watchObservedRunningTime="2025-11-21 14:47:06.044370019 +0000 UTC m=+8.879858208"
	Nov 21 14:47:17 no-preload-208006 kubelet[2080]: I1121 14:47:17.665156    2080 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:47:17 no-preload-208006 kubelet[2080]: I1121 14:47:17.694913    2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kcbj5" podStartSLOduration=12.150175104 podStartE2EDuration="14.694896145s" podCreationTimestamp="2025-11-21 14:47:03 +0000 UTC" firstStartedPulling="2025-11-21 14:47:04.526643426 +0000 UTC m=+7.362131614" lastFinishedPulling="2025-11-21 14:47:07.071364466 +0000 UTC m=+9.906852655" observedRunningTime="2025-11-21 14:47:07.822139334 +0000 UTC m=+10.657627539" watchObservedRunningTime="2025-11-21 14:47:17.694896145 +0000 UTC m=+20.530384342"
	Nov 21 14:47:17 no-preload-208006 kubelet[2080]: I1121 14:47:17.750744    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6803e67a-8098-45df-8806-57553f15a42b-config-volume\") pod \"coredns-66bc5c9577-685tb\" (UID: \"6803e67a-8098-45df-8806-57553f15a42b\") " pod="kube-system/coredns-66bc5c9577-685tb"
	Nov 21 14:47:17 no-preload-208006 kubelet[2080]: I1121 14:47:17.750804    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8bb32e63-0669-499e-81f0-9e79f31c0762-tmp\") pod \"storage-provisioner\" (UID: \"8bb32e63-0669-499e-81f0-9e79f31c0762\") " pod="kube-system/storage-provisioner"
	Nov 21 14:47:17 no-preload-208006 kubelet[2080]: I1121 14:47:17.750835    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcjkc\" (UniqueName: \"kubernetes.io/projected/8bb32e63-0669-499e-81f0-9e79f31c0762-kube-api-access-tcjkc\") pod \"storage-provisioner\" (UID: \"8bb32e63-0669-499e-81f0-9e79f31c0762\") " pod="kube-system/storage-provisioner"
	Nov 21 14:47:17 no-preload-208006 kubelet[2080]: I1121 14:47:17.750873    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94bbc\" (UniqueName: \"kubernetes.io/projected/6803e67a-8098-45df-8806-57553f15a42b-kube-api-access-94bbc\") pod \"coredns-66bc5c9577-685tb\" (UID: \"6803e67a-8098-45df-8806-57553f15a42b\") " pod="kube-system/coredns-66bc5c9577-685tb"
	Nov 21 14:47:18 no-preload-208006 kubelet[2080]: I1121 14:47:18.896107    2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-685tb" podStartSLOduration=15.896088731 podStartE2EDuration="15.896088731s" podCreationTimestamp="2025-11-21 14:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:47:18.848343083 +0000 UTC m=+21.683831288" watchObservedRunningTime="2025-11-21 14:47:18.896088731 +0000 UTC m=+21.731576920"
	Nov 21 14:47:20 no-preload-208006 kubelet[2080]: I1121 14:47:20.793215    2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.793188847 podStartE2EDuration="15.793188847s" podCreationTimestamp="2025-11-21 14:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:47:18.935718419 +0000 UTC m=+21.771206624" watchObservedRunningTime="2025-11-21 14:47:20.793188847 +0000 UTC m=+23.628677052"
	Nov 21 14:47:20 no-preload-208006 kubelet[2080]: I1121 14:47:20.874024    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf45s\" (UniqueName: \"kubernetes.io/projected/0ea2f04f-bf79-4916-b7c0-6bb05a5c87d0-kube-api-access-bf45s\") pod \"busybox\" (UID: \"0ea2f04f-bf79-4916-b7c0-6bb05a5c87d0\") " pod="default/busybox"
	
	
	==> storage-provisioner [b011fda5b154db9e2a0ab88b094fd30c123607858f7144eb0187d89bc6c74ac9] <==
	I1121 14:47:18.293877       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:47:18.327251       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:47:18.329468       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:47:18.333461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:18.340406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:47:18.340633       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:47:18.343506       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-208006_f518bb2e-5096-4e91-877d-e8663ead43ae!
	I1121 14:47:18.345372       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"279faeae-517b-4016-875c-4c1bafb56dcc", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-208006_f518bb2e-5096-4e91-877d-e8663ead43ae became leader
	W1121 14:47:18.358721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:18.370123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:47:18.444584       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-208006_f518bb2e-5096-4e91-877d-e8663ead43ae!
	W1121 14:47:20.373754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:20.380758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:22.384457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:22.389705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:24.394261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:24.402489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:26.406159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:26.411335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:28.415041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:28.422412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:30.425923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:30.430833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:32.438003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:32.443695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
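The repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings in the storage-provisioner log above come from its leader election, which still takes the kube-system/k8s.io-minikube-hostpath lock on an Endpoints object. Below is a minimal client-go sketch of the Lease-based lock that avoids the deprecation warning, as a hypothetical standalone program (the identity and timing values are illustrative assumptions, not the provisioner's actual code):

	package main

	import (
		"context"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // the provisioner runs as an in-cluster pod
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname() // pod name as leader identity (assumption)

		// Lease lock in coordination.k8s.io, replacing the deprecated Endpoints lock.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // illustrative timings
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller */ },
				OnStoppedLeading: func() { /* stop provisioning and exit */ },
			},
		})
	}

With a Lease lock the "became leader" event flow above stays the same, but the API server no longer logs an Endpoints deprecation warning on every renew.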
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-208006 -n no-preload-208006
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-208006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-208006
helpers_test.go:243: (dbg) docker inspect no-preload-208006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1e0c093eb824fed64933b797940c8238d911d3aec150dd510e66d5280b96bb39",
	        "Created": "2025-11-21T14:46:10.890663049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2844438,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:46:11.006893988Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/1e0c093eb824fed64933b797940c8238d911d3aec150dd510e66d5280b96bb39/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1e0c093eb824fed64933b797940c8238d911d3aec150dd510e66d5280b96bb39/hostname",
	        "HostsPath": "/var/lib/docker/containers/1e0c093eb824fed64933b797940c8238d911d3aec150dd510e66d5280b96bb39/hosts",
	        "LogPath": "/var/lib/docker/containers/1e0c093eb824fed64933b797940c8238d911d3aec150dd510e66d5280b96bb39/1e0c093eb824fed64933b797940c8238d911d3aec150dd510e66d5280b96bb39-json.log",
	        "Name": "/no-preload-208006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-208006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-208006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1e0c093eb824fed64933b797940c8238d911d3aec150dd510e66d5280b96bb39",
	                "LowerDir": "/var/lib/docker/overlay2/97e93d9d36c6404e1fdf7fc16f810513e13debddc7944f6c50d7d862f1c990f9-init/diff:/var/lib/docker/overlay2/789a4b9f9866e585907664b1eaf98d94438dbf699e0511f3ca5ba5ea682b005e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/97e93d9d36c6404e1fdf7fc16f810513e13debddc7944f6c50d7d862f1c990f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/97e93d9d36c6404e1fdf7fc16f810513e13debddc7944f6c50d7d862f1c990f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/97e93d9d36c6404e1fdf7fc16f810513e13debddc7944f6c50d7d862f1c990f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-208006",
	                "Source": "/var/lib/docker/volumes/no-preload-208006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-208006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-208006",
	                "name.minikube.sigs.k8s.io": "no-preload-208006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eae5e8bf8eea6950a367f46a305b79da8296b01966992ee5d4549339734788a5",
	            "SandboxKey": "/var/run/docker/netns/eae5e8bf8eea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36730"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36731"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36734"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36732"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36733"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-208006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:43:94:ae:63:fb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d96be44a654a15cf0b4e08c5c304476e6d24f8af31c19cc13890d475bc3c99d2",
	                    "EndpointID": "863b271eb0de3e7731739eef15f46eb2be0b2a73b82a89773b2ab8882a5b8cbe",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-208006",
	                        "1e0c093eb824"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
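minikube pulls individual fields out of inspect output like this with Go templates rather than parsing the full JSON; the SSH port lookup in the "Last Start" log below uses exactly that template against the 22/tcp binding shown above (host port 36730). A minimal sketch of the same query, as a hypothetical standalone helper:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort returns the host port Docker mapped to the container's 22/tcp,
	// using the same Go-template query that appears in the "Last Start" log below.
	func hostSSHPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("no-preload-208006")
		if err != nil {
			panic(err)
		}
		fmt.Println(port) // prints 36730 for the inspect output above
	}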
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-208006 -n no-preload-208006
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-208006 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-208006 logs -n 25: (1.21492306s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-650772 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p cilium-650772 sudo crio config                                                                                                                                                                                                                   │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ delete  │ -p cilium-650772                                                                                                                                                                                                                                    │ cilium-650772            │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ start   │ -p cert-expiration-184410 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-184410   │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ force-systemd-env-041746 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-041746 │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ delete  │ -p force-systemd-env-041746                                                                                                                                                                                                                         │ force-systemd-env-041746 │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:43 UTC │
	│ start   │ -p cert-options-035007 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ ssh     │ cert-options-035007 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ ssh     │ -p cert-options-035007 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ delete  │ -p cert-options-035007                                                                                                                                                                                                                              │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ start   │ -p old-k8s-version-092258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:44 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-092258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:44 UTC │ 21 Nov 25 14:44 UTC │
	│ stop    │ -p old-k8s-version-092258 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:44 UTC │ 21 Nov 25 14:45 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-092258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:45 UTC │ 21 Nov 25 14:45 UTC │
	│ start   │ -p old-k8s-version-092258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:45 UTC │ 21 Nov 25 14:45 UTC │
	│ start   │ -p cert-expiration-184410 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-184410   │ jenkins │ v1.37.0 │ 21 Nov 25 14:45 UTC │ 21 Nov 25 14:46 UTC │
	│ delete  │ -p cert-expiration-184410                                                                                                                                                                                                                           │ cert-expiration-184410   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ image   │ old-k8s-version-092258 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ pause   │ -p old-k8s-version-092258 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ start   │ -p no-preload-208006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-208006        │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:47 UTC │
	│ unpause │ -p old-k8s-version-092258 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ delete  │ -p old-k8s-version-092258                                                                                                                                                                                                                           │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ delete  │ -p old-k8s-version-092258                                                                                                                                                                                                                           │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ start   │ -p embed-certs-695324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-695324       │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:46:16
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:46:16.326993 2845792 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:46:16.327104 2845792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:46:16.327116 2845792 out.go:374] Setting ErrFile to fd 2...
	I1121 14:46:16.327122 2845792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:46:16.327480 2845792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:46:16.327951 2845792 out.go:368] Setting JSON to false
	I1121 14:46:16.328831 2845792 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70125,"bootTime":1763666252,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 14:46:16.328919 2845792 start.go:143] virtualization:  
	I1121 14:46:16.331788 2845792 out.go:179] * [embed-certs-695324] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:46:16.335181 2845792 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:46:16.335232 2845792 notify.go:221] Checking for updates...
	I1121 14:46:16.340775 2845792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:46:16.343361 2845792 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:46:16.345968 2845792 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	I1121 14:46:16.349257 2845792 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:46:16.351997 2845792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:46:16.355115 2845792 config.go:182] Loaded profile config "no-preload-208006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:46:16.355263 2845792 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:46:16.400131 2845792 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:46:16.400273 2845792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:46:16.491502 2845792 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:47 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-21 14:46:16.482277837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:46:16.491609 2845792 docker.go:319] overlay module found
	I1121 14:46:16.494579 2845792 out.go:179] * Using the docker driver based on user configuration
	I1121 14:46:16.497276 2845792 start.go:309] selected driver: docker
	I1121 14:46:16.497309 2845792 start.go:930] validating driver "docker" against <nil>
	I1121 14:46:16.497328 2845792 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:46:16.498032 2845792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:46:16.564481 2845792 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:47 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-21 14:46:16.554548265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:46:16.564662 2845792 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:46:16.564884 2845792 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:46:16.567782 2845792 out.go:179] * Using Docker driver with root privileges
	I1121 14:46:16.570600 2845792 cni.go:84] Creating CNI manager for ""
	I1121 14:46:16.570679 2845792 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:46:16.570696 2845792 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 14:46:16.570780 2845792 start.go:353] cluster config:
	{Name:embed-certs-695324 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-695324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:46:16.573739 2845792 out.go:179] * Starting "embed-certs-695324" primary control-plane node in "embed-certs-695324" cluster
	I1121 14:46:16.576483 2845792 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 14:46:16.579327 2845792 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:46:16.582077 2845792 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:46:16.582134 2845792 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1121 14:46:16.582148 2845792 cache.go:65] Caching tarball of preloaded images
	I1121 14:46:16.582231 2845792 preload.go:238] Found /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1121 14:46:16.582250 2845792 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1121 14:46:16.582363 2845792 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/config.json ...
	I1121 14:46:16.582386 2845792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/config.json: {Name:mke14d63735a3a2e3fa6310a5ff7f022bfb6b94e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:16.582540 2845792 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:46:16.608364 2845792 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:46:16.608388 2845792 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:46:16.608401 2845792 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:46:16.608425 2845792 start.go:360] acquireMachinesLock for embed-certs-695324: {Name:mkc2e7d115c6f1cd0f9b5fd1683b9702ddf4b916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:46:16.608531 2845792 start.go:364] duration metric: took 86.274µs to acquireMachinesLock for "embed-certs-695324"
	I1121 14:46:16.608564 2845792 start.go:93] Provisioning new machine with config: &{Name:embed-certs-695324 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-695324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:46:16.608656 2845792 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:46:15.804828 2843875 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-208006
	
	I1121 14:46:15.804858 2843875 ubuntu.go:182] provisioning hostname "no-preload-208006"
	I1121 14:46:15.804938 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:46:15.831238 2843875 main.go:143] libmachine: Using SSH client type: native
	I1121 14:46:15.831569 2843875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36730 <nil> <nil>}
	I1121 14:46:15.831585 2843875 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-208006 && echo "no-preload-208006" | sudo tee /etc/hostname
	I1121 14:46:15.997326 2843875 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-208006
	
	I1121 14:46:15.997403 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:46:16.032482 2843875 main.go:143] libmachine: Using SSH client type: native
	I1121 14:46:16.032819 2843875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36730 <nil> <nil>}
	I1121 14:46:16.032844 2843875 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-208006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-208006/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-208006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:46:16.193875 2843875 main.go:143] libmachine: SSH cmd err, output: <nil>: 
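
Every provisioning step in this stretch of the log is a shell snippet pushed over SSH to the container's forwarded port (127.0.0.1:36730 for no-preload-208006): set the hostname, then pin it to 127.0.1.1 in /etc/hosts so tools inside the node resolve it locally. A minimal sketch of that run-over-SSH pattern in Go with golang.org/x/crypto/ssh follows; it is illustrative only, not minikube's actual ssh_runner, and the key path and command are placeholders.

    package main

    import (
    	"bytes"
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Placeholder key path; minikube uses the per-machine id_rsa shown in the log.
    	key, err := os.ReadFile("/path/to/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test nodes
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:36730", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	var out bytes.Buffer
    	sess.Stdout = &out
    	// One session runs one command, mirroring the "About to run SSH command" lines above.
    	if err := sess.Run(`sudo hostname demo-node && echo "demo-node" | sudo tee /etc/hostname`); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Print(out.String())
    }
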
	I1121 14:46:16.193917 2843875 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-2633933/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-2633933/.minikube}
	I1121 14:46:16.193948 2843875 ubuntu.go:190] setting up certificates
	I1121 14:46:16.193958 2843875 provision.go:84] configureAuth start
	I1121 14:46:16.194023 2843875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-208006
	I1121 14:46:16.214793 2843875 provision.go:143] copyHostCerts
	I1121 14:46:16.214865 2843875 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem, removing ...
	I1121 14:46:16.214875 2843875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem
	I1121 14:46:16.214951 2843875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem (1679 bytes)
	I1121 14:46:16.215051 2843875 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem, removing ...
	I1121 14:46:16.215056 2843875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem
	I1121 14:46:16.215081 2843875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem (1082 bytes)
	I1121 14:46:16.215141 2843875 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem, removing ...
	I1121 14:46:16.215145 2843875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem
	I1121 14:46:16.215169 2843875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem (1123 bytes)
	I1121 14:46:16.215224 2843875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem org=jenkins.no-preload-208006 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-208006]
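
The server certificate generated here must carry every name and address a client may use to reach the node, which is why the SAN list is [127.0.0.1 192.168.85.2 localhost minikube no-preload-208006]. Below is a sketch of minting such a certificate with Go's crypto/x509. It is self-signed for brevity (minikube instead signs with the CA key from ca-key.pem), and only the org, SANs, and 26280h expiry are taken from the log.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-208006"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs copied from the provision.go:117 line above.
    		DNSNames:    []string{"localhost", "minikube", "no-preload-208006"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    	}
    	// Self-signed: the template doubles as its own parent; minikube would pass its CA here.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
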
	I1121 14:46:16.644766 2843875 provision.go:177] copyRemoteCerts
	I1121 14:46:16.646436 2843875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:46:16.646519 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:46:16.666316 2843875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36730 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:46:16.798116 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1121 14:46:16.819276 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:46:16.840174 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:46:16.867645 2843875 provision.go:87] duration metric: took 673.658795ms to configureAuth
	I1121 14:46:16.867690 2843875 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:46:16.867884 2843875 config.go:182] Loaded profile config "no-preload-208006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:46:16.867893 2843875 machine.go:97] duration metric: took 4.275245704s to provisionDockerMachine
	I1121 14:46:16.867900 2843875 client.go:176] duration metric: took 7.089985864s to LocalClient.Create
	I1121 14:46:16.867920 2843875 start.go:167] duration metric: took 7.09009768s to libmachine.API.Create "no-preload-208006"
	I1121 14:46:16.867929 2843875 start.go:293] postStartSetup for "no-preload-208006" (driver="docker")
	I1121 14:46:16.867952 2843875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:46:16.868009 2843875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:46:16.868050 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:46:16.886819 2843875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36730 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:46:17.003877 2843875 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:46:17.008553 2843875 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:46:17.008583 2843875 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:46:17.008608 2843875 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/addons for local assets ...
	I1121 14:46:17.008680 2843875 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/files for local assets ...
	I1121 14:46:17.008759 2843875 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem -> 26357852.pem in /etc/ssl/certs
	I1121 14:46:17.008857 2843875 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:46:17.021454 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:46:17.046785 2843875 start.go:296] duration metric: took 178.84026ms for postStartSetup
	I1121 14:46:17.047464 2843875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-208006
	I1121 14:46:17.071160 2843875 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/config.json ...
	I1121 14:46:17.071388 2843875 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:46:17.071429 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:46:17.096355 2843875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36730 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:46:17.229217 2843875 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:46:17.237671 2843875 start.go:128] duration metric: took 7.467246221s to createHost
	I1121 14:46:17.237737 2843875 start.go:83] releasing machines lock for "no-preload-208006", held for 7.467415857s
	I1121 14:46:17.237833 2843875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-208006
	I1121 14:46:17.267402 2843875 ssh_runner.go:195] Run: cat /version.json
	I1121 14:46:17.267454 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:46:17.267686 2843875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:46:17.267757 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:46:17.285868 2843875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36730 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:46:17.294395 2843875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36730 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:46:17.422212 2843875 ssh_runner.go:195] Run: systemctl --version
	I1121 14:46:17.534821 2843875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:46:17.547177 2843875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:46:17.547248 2843875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:46:17.586563 2843875 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1121 14:46:17.586583 2843875 start.go:496] detecting cgroup driver to use...
	I1121 14:46:17.586614 2843875 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:46:17.586665 2843875 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:46:17.606535 2843875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:46:17.621163 2843875 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:46:17.621284 2843875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:46:17.641636 2843875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:46:17.660887 2843875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:46:17.797691 2843875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:46:17.967227 2843875 docker.go:234] disabling docker service ...
	I1121 14:46:17.967298 2843875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:46:17.993890 2843875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:46:18.010760 2843875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:46:18.165382 2843875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:46:18.315505 2843875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:46:18.330184 2843875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:46:18.345512 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1121 14:46:18.354334 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:46:18.363523 2843875 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1121 14:46:18.363592 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1121 14:46:18.371934 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:46:18.380389 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:46:18.388714 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:46:18.397100 2843875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:46:18.404900 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:46:18.413357 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:46:18.421810 2843875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
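
The sed one-liners in this block rewrite /etc/containerd/config.toml in place: they pin sandbox_image to registry.k8s.io/pause:3.10.1, migrate v1/runc.v1 runtime entries to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, re-enable unprivileged ports, and set SystemdCgroup = false so containerd agrees with the "cgroupfs" driver detected on the host. The same kind of indentation-preserving substitution expressed with Go's regexp package, as a sketch (the sample TOML fragment is assumed, not read from the machine):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
    		"  SystemdCgroup = true\n"
    	// (?m) makes ^ and $ match per line; ${1} re-emits the captured indentation,
    	// just like the \1 back-reference in the sed command above.
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
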
	I1121 14:46:18.430502 2843875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:46:18.438080 2843875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:46:18.445378 2843875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:46:18.587718 2843875 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1121 14:46:18.693876 2843875 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:46:18.693948 2843875 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:46:18.698190 2843875 start.go:564] Will wait 60s for crictl version
	I1121 14:46:18.698256 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:18.706284 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:46:18.769772 2843875 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:46:18.769844 2843875 ssh_runner.go:195] Run: containerd --version
	I1121 14:46:18.791958 2843875 ssh_runner.go:195] Run: containerd --version
	I1121 14:46:18.826059 2843875 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1121 14:46:18.829243 2843875 cli_runner.go:164] Run: docker network inspect no-preload-208006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:46:18.848181 2843875 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:46:18.852277 2843875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:46:18.862918 2843875 kubeadm.go:884] updating cluster {Name:no-preload-208006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-208006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:46:18.863033 2843875 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:46:18.863082 2843875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:46:18.891641 2843875 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1121 14:46:18.891668 2843875 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1121 14:46:18.891704 2843875 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:46:18.891913 2843875 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:46:18.892013 2843875 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:46:18.892105 2843875 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:46:18.892192 2843875 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:46:18.892275 2843875 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1121 14:46:18.892356 2843875 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:46:18.892445 2843875 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:46:18.895334 2843875 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:46:18.895632 2843875 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:46:18.895816 2843875 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:46:18.895973 2843875 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:46:18.896120 2843875 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:46:18.896434 2843875 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:46:18.896690 2843875 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1121 14:46:18.896914 2843875 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:46:19.153519 2843875 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc"
	I1121 14:46:19.153641 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:46:19.153810 2843875 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e"
	I1121 14:46:19.153879 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1121 14:46:19.154772 2843875 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196"
	I1121 14:46:19.154864 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:46:19.158179 2843875 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9"
	I1121 14:46:19.158294 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:46:19.165306 2843875 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1121 14:46:19.165424 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1121 14:46:19.193299 2843875 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0"
	I1121 14:46:19.193422 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:46:19.197269 2843875 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a"
	I1121 14:46:19.197384 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:46:19.272757 2843875 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1121 14:46:19.272850 2843875 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:46:19.272932 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:19.273046 2843875 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1121 14:46:19.273085 2843875 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:46:19.273142 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:19.273247 2843875 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1121 14:46:19.273294 2843875 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:46:19.273337 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:19.274544 2843875 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1121 14:46:19.274650 2843875 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:46:19.274720 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:19.308807 2843875 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1121 14:46:19.308900 2843875 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:46:19.308979 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:19.320250 2843875 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1121 14:46:19.320489 2843875 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:46:19.320545 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:46:19.320581 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:19.320489 2843875 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1121 14:46:19.320621 2843875 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1121 14:46:19.320649 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:19.320449 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:46:19.320530 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:46:19.320376 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:46:19.324700 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:46:19.455679 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:46:19.455688 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:46:19.461927 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:46:19.461961 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:46:19.461999 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:46:19.463309 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:46:16.611879 2845792 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:46:16.612093 2845792 start.go:159] libmachine.API.Create for "embed-certs-695324" (driver="docker")
	I1121 14:46:16.612133 2845792 client.go:173] LocalClient.Create starting
	I1121 14:46:16.612196 2845792 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem
	I1121 14:46:16.612236 2845792 main.go:143] libmachine: Decoding PEM data...
	I1121 14:46:16.612253 2845792 main.go:143] libmachine: Parsing certificate...
	I1121 14:46:16.612307 2845792 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem
	I1121 14:46:16.612333 2845792 main.go:143] libmachine: Decoding PEM data...
	I1121 14:46:16.612344 2845792 main.go:143] libmachine: Parsing certificate...
	I1121 14:46:16.612716 2845792 cli_runner.go:164] Run: docker network inspect embed-certs-695324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:46:16.646448 2845792 cli_runner.go:211] docker network inspect embed-certs-695324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:46:16.646514 2845792 network_create.go:284] running [docker network inspect embed-certs-695324] to gather additional debugging logs...
	I1121 14:46:16.646531 2845792 cli_runner.go:164] Run: docker network inspect embed-certs-695324
	W1121 14:46:16.660808 2845792 cli_runner.go:211] docker network inspect embed-certs-695324 returned with exit code 1
	I1121 14:46:16.660841 2845792 network_create.go:287] error running [docker network inspect embed-certs-695324]: docker network inspect embed-certs-695324: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-695324 not found
	I1121 14:46:16.660857 2845792 network_create.go:289] output of [docker network inspect embed-certs-695324]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-695324 not found
	
	** /stderr **
	I1121 14:46:16.660948 2845792 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:46:16.679850 2845792 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c13a3bee40ff IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:9f:8e:c6:2a:d6} reservation:<nil>}
	I1121 14:46:16.680123 2845792 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1859e8fd5584 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:c6:00:f6:5b:96} reservation:<nil>}
	I1121 14:46:16.680363 2845792 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-44a9b6062c4d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:b5:31:a5:3d:f0} reservation:<nil>}
	I1121 14:46:16.680806 2845792 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019654d0}
	I1121 14:46:16.680824 2845792 network_create.go:124] attempt to create docker network embed-certs-695324 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1121 14:46:16.680877 2845792 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-695324 embed-certs-695324
	I1121 14:46:16.764398 2845792 network_create.go:108] docker network embed-certs-695324 192.168.76.0/24 created
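
Subnet selection above is a linear scan: the candidate private /24 blocks 192.168.49.0, 192.168.58.0 and 192.168.67.0 are skipped because their bridges already exist, and the first free block, 192.168.76.0/24, is taken. A toy reconstruction of that scan follows; the step size of 9 between candidates is inferred from this log, not confirmed against the minikube source.

    package main

    import "fmt"

    func main() {
    	// Third octets already claimed by existing bridges, per the network.go:211 lines.
    	taken := map[int]bool{49: true, 58: true, 67: true}
    	for third := 49; third <= 254; third += 9 {
    		if taken[third] {
    			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", third)
    			continue
    		}
    		fmt.Printf("using free private subnet 192.168.%d.0/24\n", third)
    		break
    	}
    }
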
	I1121 14:46:16.764426 2845792 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-695324" container
	I1121 14:46:16.764513 2845792 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:46:16.781184 2845792 cli_runner.go:164] Run: docker volume create embed-certs-695324 --label name.minikube.sigs.k8s.io=embed-certs-695324 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:46:16.805680 2845792 oci.go:103] Successfully created a docker volume embed-certs-695324
	I1121 14:46:16.805771 2845792 cli_runner.go:164] Run: docker run --rm --name embed-certs-695324-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-695324 --entrypoint /usr/bin/test -v embed-certs-695324:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:46:17.385626 2845792 oci.go:107] Successfully prepared a docker volume embed-certs-695324
	I1121 14:46:17.385698 2845792 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:46:17.385708 2845792 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:46:17.385773 2845792 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-695324:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 14:46:19.516354 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:46:19.674508 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:46:19.674559 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:46:19.674621 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:46:19.674643 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:46:19.674684 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:46:19.674718 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:46:19.727578 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:46:19.908931 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:46:19.909074 2843875 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1121 14:46:19.909168 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:46:19.909244 2843875 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1121 14:46:19.909322 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:46:19.909390 2843875 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1121 14:46:19.909460 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:46:19.909537 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:46:19.909601 2843875 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1121 14:46:19.909672 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:46:19.942158 2843875 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1121 14:46:19.942297 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:46:19.999033 2843875 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1121 14:46:19.999238 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1121 14:46:19.999261 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1121 14:46:19.999063 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1121 14:46:19.999327 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1121 14:46:19.999090 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1121 14:46:19.999371 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1121 14:46:19.999108 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1121 14:46:19.999426 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1121 14:46:19.999492 2843875 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1121 14:46:19.999586 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:46:19.999649 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1121 14:46:19.999667 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1121 14:46:19.999727 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1121 14:46:20.110939 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1121 14:46:20.111022 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1121 14:46:20.111109 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1121 14:46:20.111146 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
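
Each cached image tarball is transferred only on demand: a stat -c "%s %y" probe runs against the target path first, and the scp happens only when the probe exits non-zero, i.e. the file is missing. A local-only sketch of that check-then-copy decision (minikube executes the probe remotely through ssh_runner; the path below is one from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // needsTransfer mirrors the ssh_runner existence check: a failing
    // stat (non-zero exit) means the image is absent and must be copied.
    func needsTransfer(path string) bool {
    	return exec.Command("stat", "-c", "%s %y", path).Run() != nil
    }

    func main() {
    	img := "/var/lib/minikube/images/etcd_3.6.4-0"
    	if needsTransfer(img) {
    		fmt.Println("scp cached tarball -->", img)
    	}
    }
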
	W1121 14:46:20.215892 2843875 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1121 14:46:20.216111 2843875 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1121 14:46:20.216208 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:46:20.342555 2843875 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1121 14:46:20.342678 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1121 14:46:20.435044 2843875 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1121 14:46:20.435101 2843875 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:46:20.435198 2843875 ssh_runner.go:195] Run: which crictl
	I1121 14:46:20.854172 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:46:20.854278 2843875 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1121 14:46:20.854316 2843875 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:46:20.854367 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:46:24.097961 2843875 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (3.243552672s)
	I1121 14:46:24.097972 2843875 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.243728495s)
	I1121 14:46:24.097984 2843875 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1121 14:46:24.098001 2843875 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:46:24.098044 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:46:24.098109 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:46:23.085129 2845792 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-695324:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.699321089s)
	I1121 14:46:23.085159 2845792 kic.go:203] duration metric: took 5.699447592s to extract preloaded images to volume ...
	W1121 14:46:23.085298 2845792 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 14:46:23.085403 2845792 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:46:23.176190 2845792 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-695324 --name embed-certs-695324 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-695324 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-695324 --network embed-certs-695324 --ip 192.168.76.2 --volume embed-certs-695324:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:46:23.538701 2845792 cli_runner.go:164] Run: docker container inspect embed-certs-695324 --format={{.State.Running}}
	I1121 14:46:23.562031 2845792 cli_runner.go:164] Run: docker container inspect embed-certs-695324 --format={{.State.Status}}
	I1121 14:46:23.584821 2845792 cli_runner.go:164] Run: docker exec embed-certs-695324 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:46:23.645344 2845792 oci.go:144] the created container "embed-certs-695324" has a running status.
	I1121 14:46:23.645369 2845792 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa...
	I1121 14:46:25.298742 2845792 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:46:25.322579 2845792 cli_runner.go:164] Run: docker container inspect embed-certs-695324 --format={{.State.Status}}
	I1121 14:46:25.340758 2845792 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:46:25.340783 2845792 kic_runner.go:114] Args: [docker exec --privileged embed-certs-695324 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:46:25.422837 2845792 cli_runner.go:164] Run: docker container inspect embed-certs-695324 --format={{.State.Status}}
	I1121 14:46:25.448579 2845792 machine.go:94] provisionDockerMachine start ...
	I1121 14:46:25.448698 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:46:25.472700 2845792 main.go:143] libmachine: Using SSH client type: native
	I1121 14:46:25.473059 2845792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36735 <nil> <nil>}
	I1121 14:46:25.473077 2845792 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:46:25.473757 2845792 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1121 14:46:25.954762 2843875 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.856695361s)
	I1121 14:46:25.954790 2843875 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1121 14:46:25.954808 2843875 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:46:25.954854 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:46:25.954919 2843875 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.856767293s)
	I1121 14:46:25.954958 2843875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:46:27.000276 2843875 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.045289145s)
	I1121 14:46:27.000291 2843875 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.045413088s)
	I1121 14:46:27.000310 2843875 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1121 14:46:27.000325 2843875 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1121 14:46:27.000330 2843875 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:46:27.000383 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:46:27.000413 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:46:27.894033 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1121 14:46:27.894075 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1121 14:46:27.894124 2843875 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1121 14:46:27.894148 2843875 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:46:27.894193 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:46:29.189125 2843875 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.29490269s)
	I1121 14:46:29.189154 2843875 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1121 14:46:29.189174 2843875 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:46:29.189223 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:46:28.617194 2845792 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-695324
	
	I1121 14:46:28.617261 2845792 ubuntu.go:182] provisioning hostname "embed-certs-695324"
	I1121 14:46:28.617363 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:46:28.638734 2845792 main.go:143] libmachine: Using SSH client type: native
	I1121 14:46:28.639047 2845792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36735 <nil> <nil>}
	I1121 14:46:28.639058 2845792 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-695324 && echo "embed-certs-695324" | sudo tee /etc/hostname
	I1121 14:46:28.806918 2845792 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-695324
	
	I1121 14:46:28.807111 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:46:28.842793 2845792 main.go:143] libmachine: Using SSH client type: native
	I1121 14:46:28.843096 2845792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36735 <nil> <nil>}
	I1121 14:46:28.843112 2845792 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-695324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-695324/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-695324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:46:28.989484 2845792 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:46:28.989562 2845792 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-2633933/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-2633933/.minikube}
	I1121 14:46:28.989598 2845792 ubuntu.go:190] setting up certificates
	I1121 14:46:28.989645 2845792 provision.go:84] configureAuth start
	I1121 14:46:28.989741 2845792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-695324
	I1121 14:46:29.022787 2845792 provision.go:143] copyHostCerts
	I1121 14:46:29.022868 2845792 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem, removing ...
	I1121 14:46:29.022877 2845792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem
	I1121 14:46:29.022948 2845792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem (1082 bytes)
	I1121 14:46:29.023034 2845792 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem, removing ...
	I1121 14:46:29.023039 2845792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem
	I1121 14:46:29.023069 2845792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem (1123 bytes)
	I1121 14:46:29.023120 2845792 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem, removing ...
	I1121 14:46:29.023125 2845792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem
	I1121 14:46:29.023148 2845792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem (1679 bytes)
	I1121 14:46:29.023191 2845792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem org=jenkins.embed-certs-695324 san=[127.0.0.1 192.168.76.2 embed-certs-695324 localhost minikube]
	I1121 14:46:29.570345 2845792 provision.go:177] copyRemoteCerts
	I1121 14:46:29.570454 2845792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:46:29.570538 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:46:29.598483 2845792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36735 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa Username:docker}
	I1121 14:46:29.701473 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1121 14:46:29.722045 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:46:29.741862 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:46:29.762287 2845792 provision.go:87] duration metric: took 772.611896ms to configureAuth
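
Editor's note: the configureAuth step above generates a server certificate whose SANs cover the container IP, hostname, and loopback (san=[127.0.0.1 192.168.76.2 embed-certs-695324 localhost minikube]). A minimal crypto/x509 sketch of issuing such a cert from a CA; the throwaway in-memory CA and field choices are illustrative, not minikube's exact template:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikube's ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the same kind of SAN list the log reports.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-695324"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"embed-certs-695324", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
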
	I1121 14:46:29.762355 2845792 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:46:29.762569 2845792 config.go:182] Loaded profile config "embed-certs-695324": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:46:29.762605 2845792 machine.go:97] duration metric: took 4.314000911s to provisionDockerMachine
	I1121 14:46:29.762631 2845792 client.go:176] duration metric: took 13.150486471s to LocalClient.Create
	I1121 14:46:29.762734 2845792 start.go:167] duration metric: took 13.150641412s to libmachine.API.Create "embed-certs-695324"
	I1121 14:46:29.762766 2845792 start.go:293] postStartSetup for "embed-certs-695324" (driver="docker")
	I1121 14:46:29.762794 2845792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:46:29.762882 2845792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:46:29.762944 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:46:29.783202 2845792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36735 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa Username:docker}
	I1121 14:46:29.887259 2845792 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:46:29.891015 2845792 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:46:29.891045 2845792 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:46:29.891055 2845792 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/addons for local assets ...
	I1121 14:46:29.891110 2845792 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/files for local assets ...
	I1121 14:46:29.891193 2845792 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem -> 26357852.pem in /etc/ssl/certs
	I1121 14:46:29.891296 2845792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:46:29.899723 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:46:29.920828 2845792 start.go:296] duration metric: took 158.030476ms for postStartSetup
	I1121 14:46:29.921336 2845792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-695324
	I1121 14:46:29.939087 2845792 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/config.json ...
	I1121 14:46:29.939375 2845792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:46:29.939417 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:46:29.959478 2845792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36735 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa Username:docker}
	I1121 14:46:30.067250 2845792 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:46:30.073376 2845792 start.go:128] duration metric: took 13.464704227s to createHost
	I1121 14:46:30.073407 2845792 start.go:83] releasing machines lock for "embed-certs-695324", held for 13.464860202s
	I1121 14:46:30.073489 2845792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-695324
	I1121 14:46:30.091548 2845792 ssh_runner.go:195] Run: cat /version.json
	I1121 14:46:30.091564 2845792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:46:30.091604 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:46:30.091643 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:46:30.129854 2845792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36735 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa Username:docker}
	I1121 14:46:30.137554 2845792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36735 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa Username:docker}
	I1121 14:46:30.245801 2845792 ssh_runner.go:195] Run: systemctl --version
	I1121 14:46:30.350964 2845792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:46:30.356593 2845792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:46:30.356741 2845792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:46:30.394493 2845792 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1121 14:46:30.394533 2845792 start.go:496] detecting cgroup driver to use...
	I1121 14:46:30.394566 2845792 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:46:30.394628 2845792 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:46:30.410709 2845792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:46:30.425437 2845792 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:46:30.425513 2845792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:46:30.442999 2845792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:46:30.462984 2845792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:46:30.635865 2845792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:46:30.776089 2845792 docker.go:234] disabling docker service ...
	I1121 14:46:30.776163 2845792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:46:30.801984 2845792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:46:30.816781 2845792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:46:30.966476 2845792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:46:31.101428 2845792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:46:31.116189 2845792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:46:31.134844 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1121 14:46:31.146050 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:46:31.161494 2845792 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1121 14:46:31.161608 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1121 14:46:31.172353 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:46:31.182106 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:46:31.194710 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:46:31.203930 2845792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:46:31.213693 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:46:31.223034 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:46:31.231804 2845792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:46:31.241520 2845792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:46:31.248849 2845792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:46:31.256106 2845792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:46:31.411790 2845792 ssh_runner.go:195] Run: sudo systemctl restart containerd
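
Editor's note: the run of sed invocations above edits /etc/containerd/config.toml in place — pin the sandbox image, force SystemdCgroup = false to match the detected cgroupfs driver, normalize the runc runtime to v2, and set the CNI conf_dir — before reloading systemd and restarting containerd. A Go sketch of one such line-oriented rewrite, operating on a local copy of the file:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// "config.toml" is a local stand-in for /etc/containerd/config.toml.
	data, err := os.ReadFile("config.toml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile("config.toml", out, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
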
	I1121 14:46:31.619638 2845792 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:46:31.619759 2845792 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:46:31.624454 2845792 start.go:564] Will wait 60s for crictl version
	I1121 14:46:31.624569 2845792 ssh_runner.go:195] Run: which crictl
	I1121 14:46:31.634889 2845792 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:46:31.685642 2845792 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:46:31.685764 2845792 ssh_runner.go:195] Run: containerd --version
	I1121 14:46:31.706549 2845792 ssh_runner.go:195] Run: containerd --version
	I1121 14:46:31.733431 2845792 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1121 14:46:31.736500 2845792 cli_runner.go:164] Run: docker network inspect embed-certs-695324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:46:31.760669 2845792 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1121 14:46:31.765115 2845792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:46:31.775687 2845792 kubeadm.go:884] updating cluster {Name:embed-certs-695324 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-695324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:46:31.775803 2845792 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:46:31.775861 2845792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:46:31.810005 2845792 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:46:31.810025 2845792 containerd.go:534] Images already preloaded, skipping extraction
	I1121 14:46:31.810084 2845792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:46:31.839637 2845792 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:46:31.839708 2845792 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:46:31.839731 2845792 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1121 14:46:31.839867 2845792 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-695324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-695324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:46:31.839969 2845792 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:46:31.867506 2845792 cni.go:84] Creating CNI manager for ""
	I1121 14:46:31.867526 2845792 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:46:31.867544 2845792 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:46:31.867566 2845792 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-695324 NodeName:embed-certs-695324 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:46:31.867681 2845792 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-695324"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:46:31.867745 2845792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:46:31.876564 2845792 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:46:31.876642 2845792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:46:31.884832 2845792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1121 14:46:31.898777 2845792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:46:31.913522 2845792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
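
Editor's note: minikube renders the kubeadm config shown above from a Go template and ships it to the node as /var/tmp/minikube/kubeadm.yaml.new. A toy text/template sketch of that rendering step; the template below is a trimmed stand-in, not minikube's actual one:

package main

import (
	"os"
	"text/template"
)

// A trimmed stand-in for minikube's kubeadm config template.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

type params struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.76.2",
		APIServerPort:    8443,
		CRISocket:        "/run/containerd/containerd.sock",
		NodeName:         "embed-certs-695324",
	})
}
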
	I1121 14:46:31.927706 2845792 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:46:31.931858 2845792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:46:31.942031 2845792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:46:32.106493 2845792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:46:32.125935 2845792 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324 for IP: 192.168.76.2
	I1121 14:46:32.126008 2845792 certs.go:195] generating shared ca certs ...
	I1121 14:46:32.126042 2845792 certs.go:227] acquiring lock for ca certs: {Name:mk0a1b8efa9f1d453751b4f7afafeea16d7243a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:32.126242 2845792 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.key
	I1121 14:46:32.126329 2845792 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.key
	I1121 14:46:32.126379 2845792 certs.go:257] generating profile certs ...
	I1121 14:46:32.126486 2845792 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/client.key
	I1121 14:46:32.126520 2845792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/client.crt with IP's: []
	I1121 14:46:32.588460 2845792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/client.crt ...
	I1121 14:46:32.588534 2845792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/client.crt: {Name:mk8fad0fe6ddd8ca3ea8e59602e9b95d3e1e2e66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:32.588753 2845792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/client.key ...
	I1121 14:46:32.588794 2845792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/client.key: {Name:mk9fa67e0e4f3c9d0d7f7d4a93fdd091a5ebe542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:32.588930 2845792 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.key.f2f4e569
	I1121 14:46:32.588973 2845792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.crt.f2f4e569 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1121 14:46:33.015392 2845792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.crt.f2f4e569 ...
	I1121 14:46:33.015502 2845792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.crt.f2f4e569: {Name:mke85b18f77dc07d9b05f4b95b9d2e9b941dbefa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:33.015759 2845792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.key.f2f4e569 ...
	I1121 14:46:33.015796 2845792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.key.f2f4e569: {Name:mk076c6ec186d21de6b0c211f54328fe2ad889e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:33.015956 2845792 certs.go:382] copying /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.crt.f2f4e569 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.crt
	I1121 14:46:33.016083 2845792 certs.go:386] copying /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.key.f2f4e569 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.key
	I1121 14:46:33.016175 2845792 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/proxy-client.key
	I1121 14:46:33.016228 2845792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/proxy-client.crt with IP's: []
	I1121 14:46:33.213450 2845792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/proxy-client.crt ...
	I1121 14:46:33.213525 2845792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/proxy-client.crt: {Name:mkd7670327930017620cb6fe39b50c2de2e744ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:33.213761 2845792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/proxy-client.key ...
	I1121 14:46:33.213799 2845792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/proxy-client.key: {Name:mk6fdeb8d841ba53f5c563a5da2a1d7f25fa31d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:33.214053 2845792 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785.pem (1338 bytes)
	W1121 14:46:33.214119 2845792 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785_empty.pem, impossibly tiny 0 bytes
	I1121 14:46:33.214148 2845792 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:46:33.214207 2845792 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem (1082 bytes)
	I1121 14:46:33.214263 2845792 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:46:33.214310 2845792 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem (1679 bytes)
	I1121 14:46:33.214399 2845792 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:46:33.215065 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:46:33.234800 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:46:33.254649 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:46:33.274323 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:46:33.295607 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1121 14:46:33.317422 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:46:33.339400 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:46:33.363593 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/embed-certs-695324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:46:33.381006 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785.pem --> /usr/share/ca-certificates/2635785.pem (1338 bytes)
	I1121 14:46:33.398365 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /usr/share/ca-certificates/26357852.pem (1708 bytes)
	I1121 14:46:33.418401 2845792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:46:33.436339 2845792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:46:33.449672 2845792 ssh_runner.go:195] Run: openssl version
	I1121 14:46:33.456388 2845792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:46:33.464752 2845792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:46:33.468870 2845792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:46:33.468951 2845792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:46:33.510910 2845792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:46:33.519392 2845792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2635785.pem && ln -fs /usr/share/ca-certificates/2635785.pem /etc/ssl/certs/2635785.pem"
	I1121 14:46:33.527736 2845792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2635785.pem
	I1121 14:46:33.532080 2845792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/2635785.pem
	I1121 14:46:33.532151 2845792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2635785.pem
	I1121 14:46:33.574734 2845792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2635785.pem /etc/ssl/certs/51391683.0"
	I1121 14:46:33.583500 2845792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26357852.pem && ln -fs /usr/share/ca-certificates/26357852.pem /etc/ssl/certs/26357852.pem"
	I1121 14:46:33.592193 2845792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26357852.pem
	I1121 14:46:33.596505 2845792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/26357852.pem
	I1121 14:46:33.596573 2845792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26357852.pem
	I1121 14:46:33.641310 2845792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/26357852.pem /etc/ssl/certs/3ec20f2e.0"
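
Editor's note: the openssl x509 -hash -noout calls above compute the subject-hash filenames (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL expects as symlinks in /etc/ssl/certs. A sketch that shells out to openssl the same way and creates the link; the paths are hypothetical and openssl must be on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mimics the log: hash the cert's subject with openssl,
// then symlink <hash>.0 in certsDir back to the PEM file.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	// Hypothetical local paths; the test run links into /etc/ssl/certs
	// inside the node container.
	if err := linkBySubjectHash("minikubeCA.pem", "."); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
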
	I1121 14:46:33.650758 2845792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:46:33.655183 2845792 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:46:33.655285 2845792 kubeadm.go:401] StartCluster: {Name:embed-certs-695324 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-695324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:46:33.655391 2845792 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:46:33.655480 2845792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:46:33.697520 2845792 cri.go:89] found id: ""
	I1121 14:46:33.697616 2845792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:46:33.710358 2845792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:46:33.727002 2845792 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:46:33.727078 2845792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:46:33.739743 2845792 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:46:33.739769 2845792 kubeadm.go:158] found existing configuration files:
	
	I1121 14:46:33.739844 2845792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:46:33.749321 2845792 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:46:33.749401 2845792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:46:33.758298 2845792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:46:33.767612 2845792 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:46:33.767696 2845792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:46:33.777791 2845792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:46:33.790423 2845792 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:46:33.790505 2845792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:46:33.800394 2845792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:46:33.811598 2845792 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:46:33.811676 2845792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:46:33.821600 2845792 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:46:33.904758 2845792 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:46:33.905157 2845792 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:46:33.995601 2845792 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:46:33.995705 2845792 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1121 14:46:33.995746 2845792 kubeadm.go:319] OS: Linux
	I1121 14:46:33.995806 2845792 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:46:33.995859 2845792 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1121 14:46:33.995914 2845792 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:46:33.995967 2845792 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:46:33.996021 2845792 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:46:33.996078 2845792 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:46:33.996131 2845792 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:46:33.996185 2845792 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:46:33.996243 2845792 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1121 14:46:34.167135 2845792 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:46:34.167251 2845792 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:46:34.167355 2845792 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:46:34.192295 2845792 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:46:33.117130 2843875 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (3.927879936s)
	I1121 14:46:33.117159 2843875 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1121 14:46:33.117184 2843875 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:46:33.117246 2843875 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:46:33.634091 2843875 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1121 14:46:33.634130 2843875 cache_images.go:125] Successfully loaded all cached images
	I1121 14:46:33.634136 2843875 cache_images.go:94] duration metric: took 14.742456869s to LoadCachedImages
	I1121 14:46:33.634147 2843875 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1121 14:46:33.634240 2843875 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-208006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-208006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:46:33.634311 2843875 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:46:33.669782 2843875 cni.go:84] Creating CNI manager for ""
	I1121 14:46:33.669803 2843875 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:46:33.669821 2843875 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:46:33.669844 2843875 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-208006 NodeName:no-preload-208006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:46:33.669954 2843875 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-208006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:46:33.670020 2843875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:46:33.679009 2843875 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1121 14:46:33.679125 2843875 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1121 14:46:33.687573 2843875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1121 14:46:33.687666 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1121 14:46:33.688405 2843875 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1121 14:46:33.688848 2843875 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1121 14:46:33.693447 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1121 14:46:33.693533 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
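
Editor's note: the no-preload profile fetches kubelet/kubeadm/kubectl from dl.k8s.io; the ?checksum=file:...sha256 suffix in the URLs above is go-getter's checksum syntax, which verifies the binary against the published .sha256 file. A stdlib-only sketch of the same download-then-verify step (real URLs, hypothetical destination path):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url to dst and returns the SHA-256 of what was written.
func fetch(url, dst string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	f, err := os.Create(dst)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl"
	got, err := fetch(base, "kubectl")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// The published .sha256 file contains only the expected hex digest.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	if got != strings.TrimSpace(string(want)) {
		fmt.Fprintln(os.Stderr, "checksum mismatch")
		return
	}
	fmt.Println("kubectl verified:", got)
}
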
	I1121 14:46:34.197160 2845792 out.go:252]   - Generating certificates and keys ...
	I1121 14:46:34.197309 2845792 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:46:34.197386 2845792 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:46:34.656022 2845792 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:46:34.758350 2845792 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:46:36.265068 2845792 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:46:34.669761 2843875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:46:34.686097 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1121 14:46:34.704593 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1121 14:46:34.704645 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1121 14:46:35.105312 2843875 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1121 14:46:35.119409 2843875 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1121 14:46:35.119462 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1121 14:46:35.550981 2843875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:46:35.560747 2843875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1121 14:46:35.575126 2843875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:46:35.589278 2843875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1121 14:46:35.603080 2843875 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:46:35.606888 2843875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:46:35.617122 2843875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:46:35.767604 2843875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:46:35.798388 2843875 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006 for IP: 192.168.85.2
	I1121 14:46:35.798462 2843875 certs.go:195] generating shared ca certs ...
	I1121 14:46:35.798494 2843875 certs.go:227] acquiring lock for ca certs: {Name:mk0a1b8efa9f1d453751b4f7afafeea16d7243a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:35.798691 2843875 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.key
	I1121 14:46:35.798761 2843875 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.key
	I1121 14:46:35.798807 2843875 certs.go:257] generating profile certs ...
	I1121 14:46:35.798888 2843875 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.key
	I1121 14:46:35.798926 2843875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt with IP's: []
	I1121 14:46:36.449366 2843875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt ...
	I1121 14:46:36.449399 2843875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: {Name:mk063bf35af73b12fd837097b9d2c88810446514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:36.449620 2843875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.key ...
	I1121 14:46:36.449635 2843875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.key: {Name:mkd5f39db09014633d4ad726504e48cbdcf85b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:36.449745 2843875 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.key.78bb1819
	I1121 14:46:36.449765 2843875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.crt.78bb1819 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1121 14:46:37.246284 2843875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.crt.78bb1819 ...
	I1121 14:46:37.246318 2843875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.crt.78bb1819: {Name:mkcf688db387cf76c0d5ba22b7c31e12385c4418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:37.246491 2843875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.key.78bb1819 ...
	I1121 14:46:37.246511 2843875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.key.78bb1819: {Name:mkd2f362bf164b144bc910285230d554b2e7ebd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:37.246590 2843875 certs.go:382] copying /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.crt.78bb1819 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.crt
	I1121 14:46:37.246676 2843875 certs.go:386] copying /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.key.78bb1819 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.key
	I1121 14:46:37.246739 2843875 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.key
	I1121 14:46:37.246757 2843875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.crt with IP's: []
	I1121 14:46:37.991881 2843875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.crt ...
	I1121 14:46:37.991913 2843875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.crt: {Name:mk751baab9333e8284a6eb2fdb2f2f3b200da788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:46:37.992854 2843875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.key ...
	I1121 14:46:37.992880 2843875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.key: {Name:mkb17b741c691928ceb9aa55ee605f0c11a03e8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
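	[editor's note] The crypto.go/lock.go lines above show the per-profile cert flow: minikube generates a key pair, signs a certificate against the profile CA, and writes each .crt/.key under a named file lock. A minimal, hypothetical Go sketch of that shape using only the standard library (this is not minikube's actual crypto.go, and it omits the file locking shown in the lock.go lines):

	// certsketch: illustrative only; generates a CA-signed client cert and
	// writes the PEM pair, roughly the "Generating cert / Writing cert /
	// Writing key" steps logged above.
	package certsketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func writeClientCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, certPath, keyPath string) error {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return err
		}
		certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		// Key gets restrictive permissions, matching the usual .key handling.
		if err := os.WriteFile(certPath, certPEM, 0644); err != nil {
			return err
		}
		return os.WriteFile(keyPath, keyPEM, 0600)
	}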
	I1121 14:46:37.993129 2843875 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785.pem (1338 bytes)
	W1121 14:46:37.993174 2843875 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785_empty.pem, impossibly tiny 0 bytes
	I1121 14:46:37.993188 2843875 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:46:37.993213 2843875 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem (1082 bytes)
	I1121 14:46:37.993240 2843875 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:46:37.993267 2843875 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem (1679 bytes)
	I1121 14:46:37.993313 2843875 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:46:37.993989 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:46:38.017665 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:46:38.042525 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:46:38.064315 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:46:38.086528 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:46:38.106865 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:46:38.126635 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:46:38.148414 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:46:38.166333 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /usr/share/ca-certificates/26357852.pem (1708 bytes)
	I1121 14:46:38.184300 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:46:38.201232 2843875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785.pem --> /usr/share/ca-certificates/2635785.pem (1338 bytes)
	I1121 14:46:38.221109 2843875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
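	[editor's note] In the ssh_runner.go:362 lines above, "scp <file> --> <remote>" copies a local file into the node, while "scp memory --> /var/lib/minikube/kubeconfig" means the payload comes from an in-memory buffer rather than a file on disk. A rough sketch of that in-memory transfer, assuming a plain SSH session piped into sudo tee (minikube's ssh_runner has its own transfer logic; the key path here is an assumption, the user and port come from the sshutil lines later in this log):

	package main

	import (
		"bytes"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// copyMemory streams data to remotePath on the node over an SSH session.
	func copyMemory(client *ssh.Client, data []byte, remotePath string) error {
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		session.Stdin = bytes.NewReader(data)
		return session.Run("sudo tee " + remotePath + " >/dev/null")
	}

	func main() {
		key, _ := os.ReadFile("/home/jenkins/.ssh/id_rsa") // assumed key path
		signer, _ := ssh.ParsePrivateKey(key)              // errors ignored for brevity
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, not production
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:36730", cfg) // port as in the sshutil lines below
		if err != nil {
			return
		}
		defer client.Close()
		_ = copyMemory(client, []byte("kubeconfig contents"), "/var/lib/minikube/kubeconfig")
	}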
	I1121 14:46:38.233395 2843875 ssh_runner.go:195] Run: openssl version
	I1121 14:46:38.240061 2843875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26357852.pem && ln -fs /usr/share/ca-certificates/26357852.pem /etc/ssl/certs/26357852.pem"
	I1121 14:46:38.248653 2843875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26357852.pem
	I1121 14:46:38.252805 2843875 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/26357852.pem
	I1121 14:46:38.252871 2843875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26357852.pem
	I1121 14:46:38.294173 2843875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/26357852.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:46:38.302471 2843875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:46:38.310378 2843875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:46:38.314794 2843875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:46:38.314858 2843875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:46:38.359280 2843875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:46:38.367577 2843875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2635785.pem && ln -fs /usr/share/ca-certificates/2635785.pem /etc/ssl/certs/2635785.pem"
	I1121 14:46:38.375534 2843875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2635785.pem
	I1121 14:46:38.379603 2843875 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/2635785.pem
	I1121 14:46:38.379668 2843875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2635785.pem
	I1121 14:46:38.421240 2843875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2635785.pem /etc/ssl/certs/51391683.0"
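	[editor's note] The three openssl/ln sequences above implement the standard OpenSSL CA-directory convention: each trusted PEM in /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints (b5213941 for minikubeCA.pem above). A small Go sketch of that convention, wrapping the same CLI call (assumed helper, not minikube code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCAByHash computes the OpenSSL subject hash of a PEM certificate and
	// creates the /etc/ssl/certs/<hash>.0 symlink that TLS libraries look up.
	func linkCAByHash(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}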
	I1121 14:46:38.429436 2843875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:46:38.433758 2843875 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:46:38.433820 2843875 kubeadm.go:401] StartCluster: {Name:no-preload-208006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-208006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:46:38.433903 2843875 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:46:38.433969 2843875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:46:38.475790 2843875 cri.go:89] found id: ""
	I1121 14:46:38.475869 2843875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:46:38.488361 2843875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:46:38.496890 2843875 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:46:38.496998 2843875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:46:38.507982 2843875 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:46:38.508053 2843875 kubeadm.go:158] found existing configuration files:
	
	I1121 14:46:38.508138 2843875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:46:38.517535 2843875 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:46:38.517645 2843875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:46:38.525673 2843875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:46:38.534510 2843875 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:46:38.534619 2843875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:46:38.542849 2843875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:46:38.552074 2843875 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:46:38.552210 2843875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:46:38.560433 2843875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:46:38.569912 2843875 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:46:38.570027 2843875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
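	[editor's note] The four grep/rm pairs above are one pattern applied to each kubeconfig file: if the file does not already reference https://control-plane.minikube.internal:8443, delete it so the upcoming `kubeadm init` regenerates it. A minimal Go sketch of that check-then-remove loop (assumed names; the real logic lives in minikube's kubeadm.go):

	package main

	import (
		"os"
		"strings"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	// pruneStaleConfigs keeps a config file only if it already points at the
	// expected control-plane endpoint; anything missing or stale is removed.
	func pruneStaleConfigs(paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(p) // let `kubeadm init` rewrite it
			}
		}
	}

	func main() {
		pruneStaleConfigs([]string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}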
	I1121 14:46:38.578389 2843875 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:46:38.624113 2843875 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:46:38.624518 2843875 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:46:38.682239 2843875 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:46:38.682407 2843875 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1121 14:46:38.682466 2843875 kubeadm.go:319] OS: Linux
	I1121 14:46:38.682519 2843875 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:46:38.682573 2843875 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1121 14:46:38.682630 2843875 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:46:38.682684 2843875 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:46:38.682738 2843875 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:46:38.682792 2843875 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:46:38.682842 2843875 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:46:38.682896 2843875 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:46:38.682948 2843875 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1121 14:46:38.835113 2843875 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:46:38.835332 2843875 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:46:38.835465 2843875 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:46:38.841455 2843875 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:46:38.844479 2843875 out.go:252]   - Generating certificates and keys ...
	I1121 14:46:38.844648 2843875 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:46:38.844773 2843875 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:46:38.986880 2843875 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:46:36.801550 2845792 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:46:37.300113 2845792 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:46:37.300725 2845792 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-695324 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 14:46:37.573380 2845792 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:46:37.573534 2845792 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-695324 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 14:46:38.012202 2845792 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:46:38.354114 2845792 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:46:39.663730 2845792 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:46:39.664292 2845792 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:46:39.843739 2845792 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:46:40.412778 2845792 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:46:40.974962 2845792 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:46:42.413412 2845792 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:46:43.329405 2845792 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:46:43.329511 2845792 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:46:43.329587 2845792 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:46:39.646289 2843875 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:46:40.177826 2843875 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:46:40.404903 2843875 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:46:40.485842 2843875 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:46:40.486344 2843875 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-208006] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:46:40.970120 2843875 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:46:40.970651 2843875 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-208006] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:46:41.920062 2843875 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:46:41.996475 2843875 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:46:42.331149 2843875 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:46:42.331740 2843875 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:46:43.048833 2843875 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:46:43.876713 2843875 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:46:44.366177 2843875 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:46:44.731453 2843875 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:46:45.041964 2843875 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:46:45.043256 2843875 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:46:45.065472 2843875 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:46:43.334296 2845792 out.go:252]   - Booting up control plane ...
	I1121 14:46:43.334418 2845792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:46:43.334506 2845792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:46:43.334582 2845792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:46:43.348757 2845792 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:46:43.348869 2845792 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:46:43.358443 2845792 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:46:43.359996 2845792 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:46:43.363852 2845792 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:46:43.541412 2845792 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:46:43.541542 2845792 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:46:44.545397 2845792 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00163784s
	I1121 14:46:44.546513 2845792 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:46:44.546870 2845792 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1121 14:46:44.547189 2845792 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:46:44.547988 2845792 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:46:45.085426 2843875 out.go:252]   - Booting up control plane ...
	I1121 14:46:45.085562 2843875 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:46:45.085647 2843875 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:46:45.085721 2843875 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:46:45.116243 2843875 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:46:45.116371 2843875 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:46:45.129238 2843875 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:46:45.129667 2843875 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:46:45.129946 2843875 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:46:45.405428 2843875 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:46:45.405634 2843875 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:46:46.406009 2843875 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00129206s
	I1121 14:46:46.409857 2843875 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:46:46.409961 2843875 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1121 14:46:46.410210 2843875 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:46:46.410303 2843875 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:46:50.703193 2845792 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.154782345s
	I1121 14:46:51.315615 2843875 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.90296799s
	I1121 14:46:53.873519 2845792 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.324853511s
	I1121 14:46:56.051480 2845792 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.501883316s
	I1121 14:46:56.073087 2845792 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:46:56.099118 2845792 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:46:56.132494 2845792 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:46:56.133032 2845792 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-695324 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:46:56.159005 2845792 kubeadm.go:319] [bootstrap-token] Using token: a7ezg3.gdvjif9wl2df503w
	I1121 14:46:56.162048 2845792 out.go:252]   - Configuring RBAC rules ...
	I1121 14:46:56.162174 2845792 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:46:56.169431 2845792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:46:56.180210 2845792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:46:56.185881 2845792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:46:56.192881 2845792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:46:56.198337 2845792 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:46:55.162116 2843875 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.748659951s
	I1121 14:46:56.411632 2843875 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.001392218s
	I1121 14:46:56.432704 2843875 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:46:56.449085 2843875 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:46:56.477134 2843875 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:46:56.477354 2843875 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-208006 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:46:56.500677 2843875 kubeadm.go:319] [bootstrap-token] Using token: 2hh7sh.k2pmbohz9s00r858
	I1121 14:46:56.458698 2845792 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:46:56.974991 2845792 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:46:57.461652 2845792 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:46:57.469747 2845792 kubeadm.go:319] 
	I1121 14:46:57.469827 2845792 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:46:57.469834 2845792 kubeadm.go:319] 
	I1121 14:46:57.469914 2845792 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:46:57.469919 2845792 kubeadm.go:319] 
	I1121 14:46:57.469946 2845792 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:46:57.470017 2845792 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:46:57.470071 2845792 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:46:57.470075 2845792 kubeadm.go:319] 
	I1121 14:46:57.470131 2845792 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:46:57.470135 2845792 kubeadm.go:319] 
	I1121 14:46:57.470185 2845792 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:46:57.470190 2845792 kubeadm.go:319] 
	I1121 14:46:57.470244 2845792 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:46:57.470322 2845792 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:46:57.470393 2845792 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:46:57.470398 2845792 kubeadm.go:319] 
	I1121 14:46:57.470486 2845792 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:46:57.470566 2845792 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:46:57.470570 2845792 kubeadm.go:319] 
	I1121 14:46:57.470658 2845792 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token a7ezg3.gdvjif9wl2df503w \
	I1121 14:46:57.470765 2845792 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d756a1c258e082bbc06f965046f24233900a8e069c2a9d29a764f0b68af739ae \
	I1121 14:46:57.470786 2845792 kubeadm.go:319] 	--control-plane 
	I1121 14:46:57.470796 2845792 kubeadm.go:319] 
	I1121 14:46:57.470885 2845792 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:46:57.470889 2845792 kubeadm.go:319] 
	I1121 14:46:57.470974 2845792 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token a7ezg3.gdvjif9wl2df503w \
	I1121 14:46:57.471081 2845792 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d756a1c258e082bbc06f965046f24233900a8e069c2a9d29a764f0b68af739ae 
	I1121 14:46:57.486002 2845792 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1121 14:46:57.486322 2845792 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 14:46:57.486474 2845792 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:46:57.486513 2845792 cni.go:84] Creating CNI manager for ""
	I1121 14:46:57.486538 2845792 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:46:57.489717 2845792 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:46:56.503726 2843875 out.go:252]   - Configuring RBAC rules ...
	I1121 14:46:56.503862 2843875 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:46:56.514681 2843875 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:46:56.533128 2843875 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:46:56.539940 2843875 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:46:56.544263 2843875 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:46:56.549341 2843875 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:46:56.820115 2843875 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:46:57.250795 2843875 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:46:57.821421 2843875 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:46:57.822927 2843875 kubeadm.go:319] 
	I1121 14:46:57.823003 2843875 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:46:57.823009 2843875 kubeadm.go:319] 
	I1121 14:46:57.823090 2843875 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:46:57.823095 2843875 kubeadm.go:319] 
	I1121 14:46:57.823121 2843875 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:46:57.823183 2843875 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:46:57.823236 2843875 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:46:57.823240 2843875 kubeadm.go:319] 
	I1121 14:46:57.823296 2843875 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:46:57.823300 2843875 kubeadm.go:319] 
	I1121 14:46:57.823349 2843875 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:46:57.823354 2843875 kubeadm.go:319] 
	I1121 14:46:57.823408 2843875 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:46:57.823487 2843875 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:46:57.823558 2843875 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:46:57.823563 2843875 kubeadm.go:319] 
	I1121 14:46:57.823650 2843875 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:46:57.823730 2843875 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:46:57.823735 2843875 kubeadm.go:319] 
	I1121 14:46:57.823822 2843875 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2hh7sh.k2pmbohz9s00r858 \
	I1121 14:46:57.823930 2843875 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d756a1c258e082bbc06f965046f24233900a8e069c2a9d29a764f0b68af739ae \
	I1121 14:46:57.823952 2843875 kubeadm.go:319] 	--control-plane 
	I1121 14:46:57.823956 2843875 kubeadm.go:319] 
	I1121 14:46:57.824044 2843875 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:46:57.824049 2843875 kubeadm.go:319] 
	I1121 14:46:57.824134 2843875 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2hh7sh.k2pmbohz9s00r858 \
	I1121 14:46:57.824241 2843875 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d756a1c258e082bbc06f965046f24233900a8e069c2a9d29a764f0b68af739ae 
	I1121 14:46:57.830307 2843875 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1121 14:46:57.830715 2843875 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 14:46:57.830867 2843875 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:46:57.830890 2843875 cni.go:84] Creating CNI manager for ""
	I1121 14:46:57.830898 2843875 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:46:57.836892 2843875 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:46:57.839919 2843875 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:46:57.862803 2843875 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:46:57.862823 2843875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:46:57.956818 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:46:58.717141 2843875 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:46:58.717233 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:46:58.717289 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-208006 minikube.k8s.io/updated_at=2025_11_21T14_46_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=no-preload-208006 minikube.k8s.io/primary=true
	I1121 14:46:58.947531 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:46:58.947600 2843875 ops.go:34] apiserver oom_adj: -16
	I1121 14:46:59.448023 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:46:57.492834 2845792 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:46:57.513679 2845792 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:46:57.513700 2845792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:46:57.594379 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:46:58.110101 2845792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:46:58.110236 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:46:58.110301 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-695324 minikube.k8s.io/updated_at=2025_11_21T14_46_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=embed-certs-695324 minikube.k8s.io/primary=true
	I1121 14:46:58.497272 2845792 ops.go:34] apiserver oom_adj: -16
	I1121 14:46:58.497375 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:46:58.997498 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:46:59.497758 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:46:59.998242 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:00.497973 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:00.997486 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:46:59.947630 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:00.447609 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:00.947719 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:01.447714 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:01.947630 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:02.448441 2843875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:02.672711 2843875 kubeadm.go:1114] duration metric: took 3.955529478s to wait for elevateKubeSystemPrivileges
	I1121 14:47:02.672740 2843875 kubeadm.go:403] duration metric: took 24.238924391s to StartCluster
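	[editor's note] The burst of repeated `kubectl get sa default` runs above (roughly every 500ms) is a readiness poll: the elevateKubeSystemPrivileges wait finishes once the "default" service account exists, which signals that the cluster's RBAC bootstrap is done. A sketch of that retry loop, with the interval and timeout as assumptions and plain `kubectl` standing in for the versioned binary path used in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA retries `kubectl get sa default` until it succeeds or
	// the timeout elapses.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			err := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig="+kubeconfig).Run()
			if err == nil {
				return nil // service account exists; bootstrap complete
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
			fmt.Println(err)
		}
	}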
	I1121 14:47:02.672757 2843875 settings.go:142] acquiring lock: {Name:mkd6064915932eca5a3b1d70feb4ec8240f340da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:47:02.672827 2843875 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:47:02.673605 2843875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/kubeconfig: {Name:mkd905aaf74d26e32c0b3e46a7edfbf13f4b98ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:47:02.673864 2843875 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:47:02.674015 2843875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:47:02.674289 2843875 config.go:182] Loaded profile config "no-preload-208006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:47:02.674274 2843875 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:47:02.674358 2843875 addons.go:70] Setting storage-provisioner=true in profile "no-preload-208006"
	I1121 14:47:02.674374 2843875 addons.go:239] Setting addon storage-provisioner=true in "no-preload-208006"
	I1121 14:47:02.674402 2843875 host.go:66] Checking if "no-preload-208006" exists ...
	I1121 14:47:02.674902 2843875 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:02.675141 2843875 addons.go:70] Setting default-storageclass=true in profile "no-preload-208006"
	I1121 14:47:02.675163 2843875 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-208006"
	I1121 14:47:02.675419 2843875 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:02.677171 2843875 out.go:179] * Verifying Kubernetes components...
	I1121 14:47:02.680137 2843875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:47:02.730022 2843875 addons.go:239] Setting addon default-storageclass=true in "no-preload-208006"
	I1121 14:47:02.730083 2843875 host.go:66] Checking if "no-preload-208006" exists ...
	I1121 14:47:02.730572 2843875 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:02.735133 2843875 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:47:01.498365 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:01.998241 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:02.498185 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:02.998092 2845792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:47:03.413810 2845792 kubeadm.go:1114] duration metric: took 5.303623466s to wait for elevateKubeSystemPrivileges
	I1121 14:47:03.413860 2845792 kubeadm.go:403] duration metric: took 29.758567682s to StartCluster
	I1121 14:47:03.413879 2845792 settings.go:142] acquiring lock: {Name:mkd6064915932eca5a3b1d70feb4ec8240f340da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:47:03.413962 2845792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:47:03.415375 2845792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/kubeconfig: {Name:mkd905aaf74d26e32c0b3e46a7edfbf13f4b98ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:47:03.415643 2845792 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:47:03.415854 2845792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:47:03.416140 2845792 config.go:182] Loaded profile config "embed-certs-695324": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:47:03.416182 2845792 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:47:03.416255 2845792 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-695324"
	I1121 14:47:03.416272 2845792 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-695324"
	I1121 14:47:03.416292 2845792 host.go:66] Checking if "embed-certs-695324" exists ...
	I1121 14:47:03.417160 2845792 cli_runner.go:164] Run: docker container inspect embed-certs-695324 --format={{.State.Status}}
	I1121 14:47:03.417339 2845792 addons.go:70] Setting default-storageclass=true in profile "embed-certs-695324"
	I1121 14:47:03.417357 2845792 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-695324"
	I1121 14:47:03.417650 2845792 cli_runner.go:164] Run: docker container inspect embed-certs-695324 --format={{.State.Status}}
	I1121 14:47:03.421273 2845792 out.go:179] * Verifying Kubernetes components...
	I1121 14:47:03.429557 2845792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:47:03.458436 2845792 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:47:02.738100 2843875 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:47:02.738123 2843875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:47:02.738190 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:02.775748 2843875 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:47:02.775786 2843875 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:47:02.775868 2843875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:02.780929 2843875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36730 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:02.804810 2843875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36730 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:03.341268 2843875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:47:03.535040 2843875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:47:03.571630 2843875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:47:03.571761 2843875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:47:05.389096 2843875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.854021552s)
	I1121 14:47:05.389275 2843875 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.817499753s)
	I1121 14:47:05.396345 2843875 node_ready.go:35] waiting up to 6m0s for node "no-preload-208006" to be "Ready" ...
	I1121 14:47:05.389291 2843875 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.817636955s)
	I1121 14:47:05.396590 2843875 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
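	[editor's note] The long sed pipeline completed above edits the coredns ConfigMap in place: it inserts a hosts stanza just before the `forward . /etc/resolv.conf` plugin line and a `log` directive after `errors`, then feeds the result to `kubectl replace`. Reconstructed from the sed expressions, the resulting Corefile fragment looks approximately like this (surrounding plugins elided):

	.:53 {
	    errors
	    log
	    ...
	    hosts {
	       192.168.85.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}

	The `fallthrough` keeps DNS working for everything else: only host.minikube.internal is answered from the hosts block, and all other names fall through to the forward plugin.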
	I1121 14:47:05.400761 2843875 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1121 14:47:03.463027 2845792 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:47:03.463049 2845792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:47:03.463115 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:47:03.470965 2845792 addons.go:239] Setting addon default-storageclass=true in "embed-certs-695324"
	I1121 14:47:03.471020 2845792 host.go:66] Checking if "embed-certs-695324" exists ...
	I1121 14:47:03.471503 2845792 cli_runner.go:164] Run: docker container inspect embed-certs-695324 --format={{.State.Status}}
	I1121 14:47:03.505145 2845792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36735 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa Username:docker}
	I1121 14:47:03.512701 2845792 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:47:03.512724 2845792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:47:03.512788 2845792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-695324
	I1121 14:47:03.539987 2845792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36735 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/embed-certs-695324/id_rsa Username:docker}
	I1121 14:47:04.239229 2845792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:47:04.331791 2845792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:47:04.332015 2845792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:47:04.676238 2845792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:47:05.928994 2845792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.689677774s)
	I1121 14:47:05.929075 2845792 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.597016543s)
	I1121 14:47:05.930142 2845792 node_ready.go:35] waiting up to 6m0s for node "embed-certs-695324" to be "Ready" ...
	I1121 14:47:05.930450 2845792 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.59855052s)
	I1121 14:47:05.930479 2845792 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1121 14:47:05.931701 2845792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.255385246s)
	I1121 14:47:05.973852 2845792 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:47:05.976766 2845792 addons.go:530] duration metric: took 2.560566265s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:47:05.403839 2843875 addons.go:530] duration metric: took 2.729549073s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1121 14:47:05.902198 2843875 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-208006" context rescaled to 1 replicas
	W1121 14:47:07.399560 2843875 node_ready.go:57] node "no-preload-208006" has "Ready":"False" status (will retry)
	I1121 14:47:06.435484 2845792 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-695324" context rescaled to 1 replicas
	W1121 14:47:07.933728 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:10.433881 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:09.899343 2843875 node_ready.go:57] node "no-preload-208006" has "Ready":"False" status (will retry)
	W1121 14:47:12.399222 2843875 node_ready.go:57] node "no-preload-208006" has "Ready":"False" status (will retry)
	W1121 14:47:12.434340 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:14.933915 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:14.899208 2843875 node_ready.go:57] node "no-preload-208006" has "Ready":"False" status (will retry)
	W1121 14:47:17.399141 2843875 node_ready.go:57] node "no-preload-208006" has "Ready":"False" status (will retry)
	I1121 14:47:17.899233 2843875 node_ready.go:49] node "no-preload-208006" is "Ready"
	I1121 14:47:17.899262 2843875 node_ready.go:38] duration metric: took 12.50288338s for node "no-preload-208006" to be "Ready" ...
	I1121 14:47:17.899276 2843875 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:47:17.899331 2843875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:47:17.919686 2843875 api_server.go:72] duration metric: took 15.24579215s to wait for apiserver process to appear ...
	I1121 14:47:17.919707 2843875 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:47:17.919728 2843875 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:47:17.927711 2843875 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
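	The same probe can be run by hand; a sketch, assuming the cluster keeps the default RBAC binding that exposes /healthz to unauthenticated clients (-k skips verification of the self-signed serving cert):
	
	    curl -k https://192.168.85.2:8443/healthz
	    # a healthy control plane answers with the bare body: ok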
	I1121 14:47:17.928802 2843875 api_server.go:141] control plane version: v1.34.1
	I1121 14:47:17.928833 2843875 api_server.go:131] duration metric: took 9.118631ms to wait for apiserver health ...
	I1121 14:47:17.928843 2843875 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:47:17.932252 2843875 system_pods.go:59] 8 kube-system pods found
	I1121 14:47:17.932284 2843875 system_pods.go:61] "coredns-66bc5c9577-685tb" [6803e67a-8098-45df-8806-57553f15a42b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:47:17.932290 2843875 system_pods.go:61] "etcd-no-preload-208006" [188d36d7-4f3c-4063-a524-ce832267b9a2] Running
	I1121 14:47:17.932297 2843875 system_pods.go:61] "kindnet-kcbj5" [60909558-fb73-4b08-a011-4d60cb8d5564] Running
	I1121 14:47:17.932302 2843875 system_pods.go:61] "kube-apiserver-no-preload-208006" [f3226245-c329-4a92-809f-db513eb9b685] Running
	I1121 14:47:17.932307 2843875 system_pods.go:61] "kube-controller-manager-no-preload-208006" [d70f56da-89fb-4e35-ad0a-5aaa6446b3da] Running
	I1121 14:47:17.932310 2843875 system_pods.go:61] "kube-proxy-9xgd7" [47e3faf6-8bcc-48ab-a34a-df6769a2ca1b] Running
	I1121 14:47:17.932314 2843875 system_pods.go:61] "kube-scheduler-no-preload-208006" [ea0ab1f0-7cf1-4599-b29c-473acbcfe4d0] Running
	I1121 14:47:17.932320 2843875 system_pods.go:61] "storage-provisioner" [8bb32e63-0669-499e-81f0-9e79f31c0762] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:47:17.932325 2843875 system_pods.go:74] duration metric: took 3.477176ms to wait for pod list to return data ...
	I1121 14:47:17.932333 2843875 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:47:17.935930 2843875 default_sa.go:45] found service account: "default"
	I1121 14:47:17.935955 2843875 default_sa.go:55] duration metric: took 3.616118ms for default service account to be created ...
	I1121 14:47:17.935965 2843875 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:47:17.939105 2843875 system_pods.go:86] 8 kube-system pods found
	I1121 14:47:17.939139 2843875 system_pods.go:89] "coredns-66bc5c9577-685tb" [6803e67a-8098-45df-8806-57553f15a42b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:47:17.939146 2843875 system_pods.go:89] "etcd-no-preload-208006" [188d36d7-4f3c-4063-a524-ce832267b9a2] Running
	I1121 14:47:17.939152 2843875 system_pods.go:89] "kindnet-kcbj5" [60909558-fb73-4b08-a011-4d60cb8d5564] Running
	I1121 14:47:17.939157 2843875 system_pods.go:89] "kube-apiserver-no-preload-208006" [f3226245-c329-4a92-809f-db513eb9b685] Running
	I1121 14:47:17.939166 2843875 system_pods.go:89] "kube-controller-manager-no-preload-208006" [d70f56da-89fb-4e35-ad0a-5aaa6446b3da] Running
	I1121 14:47:17.939171 2843875 system_pods.go:89] "kube-proxy-9xgd7" [47e3faf6-8bcc-48ab-a34a-df6769a2ca1b] Running
	I1121 14:47:17.939177 2843875 system_pods.go:89] "kube-scheduler-no-preload-208006" [ea0ab1f0-7cf1-4599-b29c-473acbcfe4d0] Running
	I1121 14:47:17.939183 2843875 system_pods.go:89] "storage-provisioner" [8bb32e63-0669-499e-81f0-9e79f31c0762] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:47:17.939214 2843875 retry.go:31] will retry after 197.430621ms: missing components: kube-dns
	I1121 14:47:18.144359 2843875 system_pods.go:86] 8 kube-system pods found
	I1121 14:47:18.144398 2843875 system_pods.go:89] "coredns-66bc5c9577-685tb" [6803e67a-8098-45df-8806-57553f15a42b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:47:18.144406 2843875 system_pods.go:89] "etcd-no-preload-208006" [188d36d7-4f3c-4063-a524-ce832267b9a2] Running
	I1121 14:47:18.144413 2843875 system_pods.go:89] "kindnet-kcbj5" [60909558-fb73-4b08-a011-4d60cb8d5564] Running
	I1121 14:47:18.144421 2843875 system_pods.go:89] "kube-apiserver-no-preload-208006" [f3226245-c329-4a92-809f-db513eb9b685] Running
	I1121 14:47:18.144427 2843875 system_pods.go:89] "kube-controller-manager-no-preload-208006" [d70f56da-89fb-4e35-ad0a-5aaa6446b3da] Running
	I1121 14:47:18.144430 2843875 system_pods.go:89] "kube-proxy-9xgd7" [47e3faf6-8bcc-48ab-a34a-df6769a2ca1b] Running
	I1121 14:47:18.144434 2843875 system_pods.go:89] "kube-scheduler-no-preload-208006" [ea0ab1f0-7cf1-4599-b29c-473acbcfe4d0] Running
	I1121 14:47:18.144444 2843875 system_pods.go:89] "storage-provisioner" [8bb32e63-0669-499e-81f0-9e79f31c0762] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:47:18.144459 2843875 retry.go:31] will retry after 339.966672ms: missing components: kube-dns
	I1121 14:47:18.489144 2843875 system_pods.go:86] 8 kube-system pods found
	I1121 14:47:18.489185 2843875 system_pods.go:89] "coredns-66bc5c9577-685tb" [6803e67a-8098-45df-8806-57553f15a42b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:47:18.489193 2843875 system_pods.go:89] "etcd-no-preload-208006" [188d36d7-4f3c-4063-a524-ce832267b9a2] Running
	I1121 14:47:18.489200 2843875 system_pods.go:89] "kindnet-kcbj5" [60909558-fb73-4b08-a011-4d60cb8d5564] Running
	I1121 14:47:18.489207 2843875 system_pods.go:89] "kube-apiserver-no-preload-208006" [f3226245-c329-4a92-809f-db513eb9b685] Running
	I1121 14:47:18.489220 2843875 system_pods.go:89] "kube-controller-manager-no-preload-208006" [d70f56da-89fb-4e35-ad0a-5aaa6446b3da] Running
	I1121 14:47:18.489229 2843875 system_pods.go:89] "kube-proxy-9xgd7" [47e3faf6-8bcc-48ab-a34a-df6769a2ca1b] Running
	I1121 14:47:18.489233 2843875 system_pods.go:89] "kube-scheduler-no-preload-208006" [ea0ab1f0-7cf1-4599-b29c-473acbcfe4d0] Running
	I1121 14:47:18.489244 2843875 system_pods.go:89] "storage-provisioner" [8bb32e63-0669-499e-81f0-9e79f31c0762] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:47:18.489267 2843875 retry.go:31] will retry after 358.331251ms: missing components: kube-dns
	I1121 14:47:18.851664 2843875 system_pods.go:86] 8 kube-system pods found
	I1121 14:47:18.851746 2843875 system_pods.go:89] "coredns-66bc5c9577-685tb" [6803e67a-8098-45df-8806-57553f15a42b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:47:18.851763 2843875 system_pods.go:89] "etcd-no-preload-208006" [188d36d7-4f3c-4063-a524-ce832267b9a2] Running
	I1121 14:47:18.851770 2843875 system_pods.go:89] "kindnet-kcbj5" [60909558-fb73-4b08-a011-4d60cb8d5564] Running
	I1121 14:47:18.851774 2843875 system_pods.go:89] "kube-apiserver-no-preload-208006" [f3226245-c329-4a92-809f-db513eb9b685] Running
	I1121 14:47:18.851779 2843875 system_pods.go:89] "kube-controller-manager-no-preload-208006" [d70f56da-89fb-4e35-ad0a-5aaa6446b3da] Running
	I1121 14:47:18.851783 2843875 system_pods.go:89] "kube-proxy-9xgd7" [47e3faf6-8bcc-48ab-a34a-df6769a2ca1b] Running
	I1121 14:47:18.851787 2843875 system_pods.go:89] "kube-scheduler-no-preload-208006" [ea0ab1f0-7cf1-4599-b29c-473acbcfe4d0] Running
	I1121 14:47:18.851792 2843875 system_pods.go:89] "storage-provisioner" [8bb32e63-0669-499e-81f0-9e79f31c0762] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:47:18.851810 2843875 system_pods.go:126] duration metric: took 915.838215ms to wait for k8s-apps to be running ...
	I1121 14:47:18.851824 2843875 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:47:18.851895 2843875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:47:18.869665 2843875 system_svc.go:56] duration metric: took 17.830478ms WaitForService to wait for kubelet
	I1121 14:47:18.869695 2843875 kubeadm.go:587] duration metric: took 16.195805917s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:47:18.869714 2843875 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:47:18.899947 2843875 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 14:47:18.899983 2843875 node_conditions.go:123] node cpu capacity is 2
	I1121 14:47:18.899998 2843875 node_conditions.go:105] duration metric: took 30.27319ms to run NodePressure ...
	I1121 14:47:18.900018 2843875 start.go:242] waiting for startup goroutines ...
	I1121 14:47:18.900035 2843875 start.go:247] waiting for cluster config update ...
	I1121 14:47:18.900047 2843875 start.go:256] writing updated cluster config ...
	I1121 14:47:18.900334 2843875 ssh_runner.go:195] Run: rm -f paused
	I1121 14:47:18.905576 2843875 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:47:18.925377 2843875 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-685tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:18.941379 2843875 pod_ready.go:94] pod "coredns-66bc5c9577-685tb" is "Ready"
	I1121 14:47:18.941411 2843875 pod_ready.go:86] duration metric: took 16.002584ms for pod "coredns-66bc5c9577-685tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:18.947135 2843875 pod_ready.go:83] waiting for pod "etcd-no-preload-208006" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:18.955621 2843875 pod_ready.go:94] pod "etcd-no-preload-208006" is "Ready"
	I1121 14:47:18.955649 2843875 pod_ready.go:86] duration metric: took 8.487092ms for pod "etcd-no-preload-208006" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:19.019716 2843875 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-208006" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:19.024921 2843875 pod_ready.go:94] pod "kube-apiserver-no-preload-208006" is "Ready"
	I1121 14:47:19.024950 2843875 pod_ready.go:86] duration metric: took 5.204379ms for pod "kube-apiserver-no-preload-208006" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:19.027589 2843875 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-208006" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:19.310481 2843875 pod_ready.go:94] pod "kube-controller-manager-no-preload-208006" is "Ready"
	I1121 14:47:19.310509 2843875 pod_ready.go:86] duration metric: took 282.895643ms for pod "kube-controller-manager-no-preload-208006" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:19.510554 2843875 pod_ready.go:83] waiting for pod "kube-proxy-9xgd7" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:19.909588 2843875 pod_ready.go:94] pod "kube-proxy-9xgd7" is "Ready"
	I1121 14:47:19.909617 2843875 pod_ready.go:86] duration metric: took 399.034288ms for pod "kube-proxy-9xgd7" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:20.110190 2843875 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-208006" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:20.509705 2843875 pod_ready.go:94] pod "kube-scheduler-no-preload-208006" is "Ready"
	I1121 14:47:20.509736 2843875 pod_ready.go:86] duration metric: took 399.520691ms for pod "kube-scheduler-no-preload-208006" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:47:20.509749 2843875 pod_ready.go:40] duration metric: took 1.604141714s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:47:20.570644 2843875 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 14:47:20.573889 2843875 out.go:179] * Done! kubectl is now configured to use "no-preload-208006" cluster and "default" namespace by default
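	The skew note above compares the local kubectl client against the cluster's control plane; the same check by hand, assuming the freshly written context:
	
	    kubectl --context no-preload-208006 version
	    # prints the client (v1.33.2) and server (v1.34.1) versions side by side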
	W1121 14:47:17.433813 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:19.933958 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:22.434325 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:24.932758 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:26.933974 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
	W1121 14:47:29.432964 2845792 node_ready.go:57] node "embed-certs-695324" has "Ready":"False" status (will retry)
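	The "will retry" lines are minikube polling the node's Ready condition. A hedged shell equivalent of that wait, assuming the kubeconfig context carries the profile name as minikube sets it up:
	
	    until [ "$(kubectl --context embed-certs-695324 get node embed-certs-695324 \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')" = "True" ]; do
	      sleep 2
	    done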
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	26337959306e0       1611cd07b61d5       10 seconds ago      Running             busybox                   0                   0bc74ef149db8       busybox                                     default
	9a1d80f65a499       138784d87c9c5       15 seconds ago      Running             coredns                   0                   82f350a64c951       coredns-66bc5c9577-685tb                    kube-system
	b011fda5b154d       66749159455b3       16 seconds ago      Running             storage-provisioner       0                   0de8a2721d3b6       storage-provisioner                         kube-system
	f3dd66e01305a       b1a8c6f707935       27 seconds ago      Running             kindnet-cni               0                   5cdfbfdd8455a       kindnet-kcbj5                               kube-system
	7e57e7c8851a9       05baa95f5142d       29 seconds ago      Running             kube-proxy                0                   0272c44eda3b8       kube-proxy-9xgd7                            kube-system
	05bfdef30141a       b5f57ec6b9867       47 seconds ago      Running             kube-scheduler            0                   253d1c4b4cc62       kube-scheduler-no-preload-208006            kube-system
	e51ffcbc830b0       7eb2c6ff0c5a7       47 seconds ago      Running             kube-controller-manager   0                   4139c67d6c6dd       kube-controller-manager-no-preload-208006   kube-system
	670da2ec0c5a2       43911e833d64d       47 seconds ago      Running             kube-apiserver            0                   a533469dde9ca       kube-apiserver-no-preload-208006            kube-system
	8f30bcc0ffef6       a1894772a478e       47 seconds ago      Running             etcd                      0                   ace5ff2dd9929       etcd-no-preload-208006                      kube-system
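	This table is the CRI-level view of the node. With the docker driver it can be reproduced by exec'ing into the node container; a sketch, assuming crictl ships in the minikube node image as it does for containerd-based runtimes:
	
	    docker exec no-preload-208006 sudo crictl ps -a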
	
	
	==> containerd <==
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.197559859Z" level=info msg="CreateContainer within sandbox \"82f350a64c9516ca00bcab3002194bfe71c1b684cb5d2f4ad5470174cb5e3bd8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.199372795Z" level=info msg="StartContainer for \"b011fda5b154db9e2a0ab88b094fd30c123607858f7144eb0187d89bc6c74ac9\""
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.200465014Z" level=info msg="connecting to shim b011fda5b154db9e2a0ab88b094fd30c123607858f7144eb0187d89bc6c74ac9" address="unix:///run/containerd/s/09f659852cedc7fa9b63e78a0785442734cb68d6f5faad8529006b1f2ee0c3b5" protocol=ttrpc version=3
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.213625498Z" level=info msg="Container 9a1d80f65a499b5e66dbd87f28482a42a0845efa7b882a7e675d26b780ed885b: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.245218362Z" level=info msg="CreateContainer within sandbox \"82f350a64c9516ca00bcab3002194bfe71c1b684cb5d2f4ad5470174cb5e3bd8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a1d80f65a499b5e66dbd87f28482a42a0845efa7b882a7e675d26b780ed885b\""
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.248711743Z" level=info msg="StartContainer for \"9a1d80f65a499b5e66dbd87f28482a42a0845efa7b882a7e675d26b780ed885b\""
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.253780873Z" level=info msg="connecting to shim 9a1d80f65a499b5e66dbd87f28482a42a0845efa7b882a7e675d26b780ed885b" address="unix:///run/containerd/s/a26a29e70ca1d73b1909ba5807d7fc1f4c04ee12306bf6c5dae092582b93c1db" protocol=ttrpc version=3
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.280939363Z" level=info msg="StartContainer for \"b011fda5b154db9e2a0ab88b094fd30c123607858f7144eb0187d89bc6c74ac9\" returns successfully"
	Nov 21 14:47:18 no-preload-208006 containerd[760]: time="2025-11-21T14:47:18.353652507Z" level=info msg="StartContainer for \"9a1d80f65a499b5e66dbd87f28482a42a0845efa7b882a7e675d26b780ed885b\" returns successfully"
	Nov 21 14:47:21 no-preload-208006 containerd[760]: time="2025-11-21T14:47:21.109017599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:0ea2f04f-bf79-4916-b7c0-6bb05a5c87d0,Namespace:default,Attempt:0,}"
	Nov 21 14:47:21 no-preload-208006 containerd[760]: time="2025-11-21T14:47:21.173965337Z" level=info msg="connecting to shim 0bc74ef149db867a422be0c95f9362b93b20c2265c9e527905068e0c3de37e4e" address="unix:///run/containerd/s/790cf03eccdd35758c66551a1d2548db57ab7665489dfecec5725dcd83c847e3" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:47:21 no-preload-208006 containerd[760]: time="2025-11-21T14:47:21.233464780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:0ea2f04f-bf79-4916-b7c0-6bb05a5c87d0,Namespace:default,Attempt:0,} returns sandbox id \"0bc74ef149db867a422be0c95f9362b93b20c2265c9e527905068e0c3de37e4e\""
	Nov 21 14:47:21 no-preload-208006 containerd[760]: time="2025-11-21T14:47:21.238759061Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.372986515Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.374885103Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937186"
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.377148307Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.380164828Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.380917044Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.1419312s"
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.380960760Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.388873533Z" level=info msg="CreateContainer within sandbox \"0bc74ef149db867a422be0c95f9362b93b20c2265c9e527905068e0c3de37e4e\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.405911354Z" level=info msg="Container 26337959306e0f5e739c563b1f6adef1693f017dfb0173ccd8f364de6dcb6412: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.418460015Z" level=info msg="CreateContainer within sandbox \"0bc74ef149db867a422be0c95f9362b93b20c2265c9e527905068e0c3de37e4e\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"26337959306e0f5e739c563b1f6adef1693f017dfb0173ccd8f364de6dcb6412\""
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.419058512Z" level=info msg="StartContainer for \"26337959306e0f5e739c563b1f6adef1693f017dfb0173ccd8f364de6dcb6412\""
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.419991991Z" level=info msg="connecting to shim 26337959306e0f5e739c563b1f6adef1693f017dfb0173ccd8f364de6dcb6412" address="unix:///run/containerd/s/790cf03eccdd35758c66551a1d2548db57ab7665489dfecec5725dcd83c847e3" protocol=ttrpc version=3
	Nov 21 14:47:23 no-preload-208006 containerd[760]: time="2025-11-21T14:47:23.497472062Z" level=info msg="StartContainer for \"26337959306e0f5e739c563b1f6adef1693f017dfb0173ccd8f364de6dcb6412\" returns successfully"
	
	
	==> coredns [9a1d80f65a499b5e66dbd87f28482a42a0845efa7b882a7e675d26b780ed885b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46660 - 36110 "HINFO IN 6973837720917439497.4433172359432745824. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027940457s
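	minikube applies the same host-record injection to every profile's CoreDNS, so the record can be verified from inside this cluster with a throwaway pod; a sketch, assuming busybox:1.28 is pullable (its nslookup is the variant commonly used for this check):
	
	    kubectl --context no-preload-208006 run dnscheck --rm -i --restart=Never \
	      --image=busybox:1.28 -- nslookup host.minikube.internal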
	
	
	==> describe nodes <==
	Name:               no-preload-208006
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-208006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=no-preload-208006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_46_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:46:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-208006
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:47:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:47:28 +0000   Fri, 21 Nov 2025 14:46:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:47:28 +0000   Fri, 21 Nov 2025 14:46:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:47:28 +0000   Fri, 21 Nov 2025 14:46:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:47:28 +0000   Fri, 21 Nov 2025 14:47:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-208006
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                f039ed5e-2d51-4016-b933-b720b8535aa9
	  Boot ID:                    41b0e09d-5a9a-49c9-8980-dca608ba3fce
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-66bc5c9577-685tb                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-no-preload-208006                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-kcbj5                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-208006             250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-no-preload-208006    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-9xgd7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-208006             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 28s   kube-proxy       
	  Normal   Starting                 37s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 37s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  37s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  36s   kubelet          Node no-preload-208006 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s   kubelet          Node no-preload-208006 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     36s   kubelet          Node no-preload-208006 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           32s   node-controller  Node no-preload-208006 event: Registered Node no-preload-208006 in Controller
	  Normal   NodeReady                17s   kubelet          Node no-preload-208006 status is now: NodeReady
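	The block above is standard kubectl describe output and can be regenerated against this profile (context name assumed to match the profile name):
	
	    kubectl --context no-preload-208006 describe node no-preload-208006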
	
	
	==> dmesg <==
	[Nov21 13:02] overlayfs: idmapped layers are currently not supported
	[Nov21 13:03] overlayfs: idmapped layers are currently not supported
	[Nov21 13:06] overlayfs: idmapped layers are currently not supported
	[Nov21 13:08] overlayfs: idmapped layers are currently not supported
	[Nov21 13:09] overlayfs: idmapped layers are currently not supported
	[Nov21 13:10] overlayfs: idmapped layers are currently not supported
	[ +19.808801] overlayfs: idmapped layers are currently not supported
	[Nov21 13:11] overlayfs: idmapped layers are currently not supported
	[Nov21 13:12] overlayfs: idmapped layers are currently not supported
	[Nov21 13:13] overlayfs: idmapped layers are currently not supported
	[Nov21 13:14] overlayfs: idmapped layers are currently not supported
	[Nov21 13:15] overlayfs: idmapped layers are currently not supported
	[ +16.772572] overlayfs: idmapped layers are currently not supported
	[Nov21 13:16] overlayfs: idmapped layers are currently not supported
	[Nov21 13:17] overlayfs: idmapped layers are currently not supported
	[ +27.396777] overlayfs: idmapped layers are currently not supported
	[Nov21 13:18] overlayfs: idmapped layers are currently not supported
	[ +25.430119] overlayfs: idmapped layers are currently not supported
	[Nov21 13:19] overlayfs: idmapped layers are currently not supported
	[Nov21 13:20] overlayfs: idmapped layers are currently not supported
	[Nov21 13:21] overlayfs: idmapped layers are currently not supported
	[Nov21 13:22] overlayfs: idmapped layers are currently not supported
	[Nov21 13:23] overlayfs: idmapped layers are currently not supported
	[Nov21 13:24] overlayfs: idmapped layers are currently not supported
	[Nov21 13:55] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [8f30bcc0ffef68f33676c531a54c185943fd5843eeb062e2a7a47fc41ccff421] <==
	{"level":"warn","ts":"2025-11-21T14:46:52.265097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.291431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.319419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.357227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.390835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.402731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.424772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.457429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.505766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.533825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.561751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.592001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.618663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.663395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.681496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.702665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.725845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.745310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.762325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.782925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.800021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.813638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.839382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:53.007710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60644","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T14:47:03.837350Z","caller":"traceutil/trace.go:172","msg":"trace[1449625866] transaction","detail":"{read_only:false; response_revision:360; number_of_response:1; }","duration":"106.117683ms","start":"2025-11-21T14:47:03.731216Z","end":"2025-11-21T14:47:03.837334Z","steps":["trace[1449625866] 'process raft request'  (duration: 96.618427ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:47:34 up 19:30,  0 user,  load average: 4.13, 3.46, 2.93
	Linux no-preload-208006 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f3dd66e01305aa67da4fef766c626727d676c7ffe74473a1010270d904b974d1] <==
	I1121 14:47:07.423627       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:47:07.424602       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:47:07.426107       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:47:07.426264       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:47:07.426369       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:47:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:47:07.629595       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:47:07.629775       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:47:07.629851       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:47:07.631056       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:47:07.930389       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:47:07.930501       1 metrics.go:72] Registering metrics
	I1121 14:47:07.930644       1 controller.go:711] "Syncing nftables rules"
	I1121 14:47:17.637136       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:47:17.637190       1 main.go:301] handling current node
	I1121 14:47:27.629095       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:47:27.629130       1 main.go:301] handling current node
	
	
	==> kube-apiserver [670da2ec0c5a22405cd819ddba5cacc0165673f1fa923b5507091c8767428c9e] <==
	I1121 14:46:54.476844       1 controller.go:667] quota admission added evaluator for: namespaces
	E1121 14:46:54.483547       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1121 14:46:54.537600       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:46:54.539719       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 14:46:54.548534       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:46:54.548815       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:46:54.703050       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:46:55.076566       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:46:55.093359       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:46:55.093583       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:46:56.227298       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:46:56.305785       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:46:56.386601       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:46:56.394664       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1121 14:46:56.395983       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:46:56.401589       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:46:57.221291       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:46:57.228300       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:46:57.244684       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:46:57.256479       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:47:03.211276       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:47:03.407397       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:47:03.414140       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:47:03.581659       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1121 14:47:30.955479       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:49786: use of closed network connection
	
	
	==> kube-controller-manager [e51ffcbc830b08843be90ae4a5cbc20e3b6d6721e6d01983023416c9a7ebff67] <==
	I1121 14:47:02.340990       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-208006"
	I1121 14:47:02.341063       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1121 14:47:02.341122       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 14:47:02.341167       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:47:02.353345       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:47:02.357439       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:47:02.361191       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:47:02.362665       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 14:47:02.362688       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:47:02.362791       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:47:02.362810       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 14:47:02.362828       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 14:47:02.363161       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-208006" podCIDRs=["10.244.0.0/24"]
	I1121 14:47:02.363466       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:47:02.363519       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:47:02.363532       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1121 14:47:02.363556       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 14:47:02.363588       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:47:02.363597       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 14:47:02.363609       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1121 14:47:02.363622       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1121 14:47:02.363654       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:47:02.375264       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 14:47:18.902229       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1121 14:47:22.343852       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7e57e7c8851a9cc8ab9aae48190e5273f29aca6479946be08dd8ce6aae53eae4] <==
	I1121 14:47:05.018304       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:47:05.134803       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:47:05.240791       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:47:05.240841       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:47:05.240934       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:47:05.568945       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:47:05.569237       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:47:05.677217       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:47:05.677568       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:47:05.677588       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:47:05.682439       1 config.go:200] "Starting service config controller"
	I1121 14:47:05.682500       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:47:05.683699       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:47:05.683716       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:47:05.683900       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:47:05.683911       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:47:05.728327       1 config.go:309] "Starting node config controller"
	I1121 14:47:05.728346       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:47:05.728381       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:47:05.783427       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:47:05.784589       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:47:05.784633       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [05bfdef30141a8e21622a5df2d0b5fad2030cdf0b24ad8c65c35f99be64b97da] <==
	I1121 14:46:55.117337       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:46:55.124337       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 14:46:55.130702       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:46:55.130992       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:46:55.131144       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1121 14:46:55.158936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:46:55.159314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:46:55.159506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:46:55.159738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:46:55.159925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:46:55.160395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:46:55.160605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:46:55.160797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:46:55.160977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:46:55.161170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:46:55.161339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:46:55.161498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:46:55.161762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:46:55.161964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:46:55.162234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:46:55.162392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:46:55.162542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:46:55.162641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:46:55.163429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1121 14:46:56.631723       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:46:58 no-preload-208006 kubelet[2080]: I1121 14:46:58.436313    2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-208006" podStartSLOduration=2.435938638 podStartE2EDuration="2.435938638s" podCreationTimestamp="2025-11-21 14:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:46:58.435761347 +0000 UTC m=+1.271249544" watchObservedRunningTime="2025-11-21 14:46:58.435938638 +0000 UTC m=+1.271426835"
	Nov 21 14:46:58 no-preload-208006 kubelet[2080]: I1121 14:46:58.505595    2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-208006" podStartSLOduration=0.50556772 podStartE2EDuration="505.56772ms" podCreationTimestamp="2025-11-21 14:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:46:58.476902164 +0000 UTC m=+1.312390352" watchObservedRunningTime="2025-11-21 14:46:58.50556772 +0000 UTC m=+1.341055917"
	Nov 21 14:46:58 no-preload-208006 kubelet[2080]: I1121 14:46:58.505706    2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-208006" podStartSLOduration=0.50570178 podStartE2EDuration="505.70178ms" podCreationTimestamp="2025-11-21 14:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:46:58.49872374 +0000 UTC m=+1.334211962" watchObservedRunningTime="2025-11-21 14:46:58.50570178 +0000 UTC m=+1.341189977"
	Nov 21 14:46:58 no-preload-208006 kubelet[2080]: I1121 14:46:58.575385    2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-208006" podStartSLOduration=0.575366503 podStartE2EDuration="575.366503ms" podCreationTimestamp="2025-11-21 14:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:46:58.53272479 +0000 UTC m=+1.368212979" watchObservedRunningTime="2025-11-21 14:46:58.575366503 +0000 UTC m=+1.410854700"
	Nov 21 14:47:02 no-preload-208006 kubelet[2080]: I1121 14:47:02.337473    2080 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:47:02 no-preload-208006 kubelet[2080]: I1121 14:47:02.339571    2080 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.591659    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60909558-fb73-4b08-a011-4d60cb8d5564-xtables-lock\") pod \"kindnet-kcbj5\" (UID: \"60909558-fb73-4b08-a011-4d60cb8d5564\") " pod="kube-system/kindnet-kcbj5"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.592279    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60909558-fb73-4b08-a011-4d60cb8d5564-lib-modules\") pod \"kindnet-kcbj5\" (UID: \"60909558-fb73-4b08-a011-4d60cb8d5564\") " pod="kube-system/kindnet-kcbj5"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.592700    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47e3faf6-8bcc-48ab-a34a-df6769a2ca1b-xtables-lock\") pod \"kube-proxy-9xgd7\" (UID: \"47e3faf6-8bcc-48ab-a34a-df6769a2ca1b\") " pod="kube-system/kube-proxy-9xgd7"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.593001    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mttf5\" (UniqueName: \"kubernetes.io/projected/47e3faf6-8bcc-48ab-a34a-df6769a2ca1b-kube-api-access-mttf5\") pod \"kube-proxy-9xgd7\" (UID: \"47e3faf6-8bcc-48ab-a34a-df6769a2ca1b\") " pod="kube-system/kube-proxy-9xgd7"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.597427    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47e3faf6-8bcc-48ab-a34a-df6769a2ca1b-lib-modules\") pod \"kube-proxy-9xgd7\" (UID: \"47e3faf6-8bcc-48ab-a34a-df6769a2ca1b\") " pod="kube-system/kube-proxy-9xgd7"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.597668    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/60909558-fb73-4b08-a011-4d60cb8d5564-cni-cfg\") pod \"kindnet-kcbj5\" (UID: \"60909558-fb73-4b08-a011-4d60cb8d5564\") " pod="kube-system/kindnet-kcbj5"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.597795    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs5vk\" (UniqueName: \"kubernetes.io/projected/60909558-fb73-4b08-a011-4d60cb8d5564-kube-api-access-cs5vk\") pod \"kindnet-kcbj5\" (UID: \"60909558-fb73-4b08-a011-4d60cb8d5564\") " pod="kube-system/kindnet-kcbj5"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.597928    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/47e3faf6-8bcc-48ab-a34a-df6769a2ca1b-kube-proxy\") pod \"kube-proxy-9xgd7\" (UID: \"47e3faf6-8bcc-48ab-a34a-df6769a2ca1b\") " pod="kube-system/kube-proxy-9xgd7"
	Nov 21 14:47:03 no-preload-208006 kubelet[2080]: I1121 14:47:03.889961    2080 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 21 14:47:06 no-preload-208006 kubelet[2080]: I1121 14:47:06.044390    2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9xgd7" podStartSLOduration=3.044370019 podStartE2EDuration="3.044370019s" podCreationTimestamp="2025-11-21 14:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:47:05.838086139 +0000 UTC m=+8.673574344" watchObservedRunningTime="2025-11-21 14:47:06.044370019 +0000 UTC m=+8.879858208"
	Nov 21 14:47:17 no-preload-208006 kubelet[2080]: I1121 14:47:17.665156    2080 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:47:17 no-preload-208006 kubelet[2080]: I1121 14:47:17.694913    2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kcbj5" podStartSLOduration=12.150175104 podStartE2EDuration="14.694896145s" podCreationTimestamp="2025-11-21 14:47:03 +0000 UTC" firstStartedPulling="2025-11-21 14:47:04.526643426 +0000 UTC m=+7.362131614" lastFinishedPulling="2025-11-21 14:47:07.071364466 +0000 UTC m=+9.906852655" observedRunningTime="2025-11-21 14:47:07.822139334 +0000 UTC m=+10.657627539" watchObservedRunningTime="2025-11-21 14:47:17.694896145 +0000 UTC m=+20.530384342"
	Nov 21 14:47:17 no-preload-208006 kubelet[2080]: I1121 14:47:17.750744    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6803e67a-8098-45df-8806-57553f15a42b-config-volume\") pod \"coredns-66bc5c9577-685tb\" (UID: \"6803e67a-8098-45df-8806-57553f15a42b\") " pod="kube-system/coredns-66bc5c9577-685tb"
	Nov 21 14:47:17 no-preload-208006 kubelet[2080]: I1121 14:47:17.750804    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8bb32e63-0669-499e-81f0-9e79f31c0762-tmp\") pod \"storage-provisioner\" (UID: \"8bb32e63-0669-499e-81f0-9e79f31c0762\") " pod="kube-system/storage-provisioner"
	Nov 21 14:47:17 no-preload-208006 kubelet[2080]: I1121 14:47:17.750835    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcjkc\" (UniqueName: \"kubernetes.io/projected/8bb32e63-0669-499e-81f0-9e79f31c0762-kube-api-access-tcjkc\") pod \"storage-provisioner\" (UID: \"8bb32e63-0669-499e-81f0-9e79f31c0762\") " pod="kube-system/storage-provisioner"
	Nov 21 14:47:17 no-preload-208006 kubelet[2080]: I1121 14:47:17.750873    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94bbc\" (UniqueName: \"kubernetes.io/projected/6803e67a-8098-45df-8806-57553f15a42b-kube-api-access-94bbc\") pod \"coredns-66bc5c9577-685tb\" (UID: \"6803e67a-8098-45df-8806-57553f15a42b\") " pod="kube-system/coredns-66bc5c9577-685tb"
	Nov 21 14:47:18 no-preload-208006 kubelet[2080]: I1121 14:47:18.896107    2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-685tb" podStartSLOduration=15.896088731 podStartE2EDuration="15.896088731s" podCreationTimestamp="2025-11-21 14:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:47:18.848343083 +0000 UTC m=+21.683831288" watchObservedRunningTime="2025-11-21 14:47:18.896088731 +0000 UTC m=+21.731576920"
	Nov 21 14:47:20 no-preload-208006 kubelet[2080]: I1121 14:47:20.793215    2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.793188847 podStartE2EDuration="15.793188847s" podCreationTimestamp="2025-11-21 14:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:47:18.935718419 +0000 UTC m=+21.771206624" watchObservedRunningTime="2025-11-21 14:47:20.793188847 +0000 UTC m=+23.628677052"
	Nov 21 14:47:20 no-preload-208006 kubelet[2080]: I1121 14:47:20.874024    2080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf45s\" (UniqueName: \"kubernetes.io/projected/0ea2f04f-bf79-4916-b7c0-6bb05a5c87d0-kube-api-access-bf45s\") pod \"busybox\" (UID: \"0ea2f04f-bf79-4916-b7c0-6bb05a5c87d0\") " pod="default/busybox"
	
	
	==> storage-provisioner [b011fda5b154db9e2a0ab88b094fd30c123607858f7144eb0187d89bc6c74ac9] <==
	I1121 14:47:18.329468       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:47:18.333461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:18.340406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:47:18.340633       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:47:18.343506       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-208006_f518bb2e-5096-4e91-877d-e8663ead43ae!
	I1121 14:47:18.345372       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"279faeae-517b-4016-875c-4c1bafb56dcc", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-208006_f518bb2e-5096-4e91-877d-e8663ead43ae became leader
	W1121 14:47:18.358721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:18.370123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:47:18.444584       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-208006_f518bb2e-5096-4e91-877d-e8663ead43ae!
	W1121 14:47:20.373754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:20.380758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:22.384457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:22.389705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:24.394261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:24.402489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:26.406159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:26.411335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:28.415041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:28.422412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:30.425923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:30.430833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:32.438003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:32.443695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:34.448734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:34.456232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
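The storage-provisioner log above repeats the same client-go warning every couple of seconds: its leader election still reads and writes a v1 Endpoints object, which is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. As a rough sketch of what the suggested replacement looks like on the consumer side (illustrative client-go code, not minikube's or the provisioner's own; the in-cluster config and the "kube-system"/"kube-dns" names are assumptions for the example):

	// List EndpointSlices for a Service instead of reading the deprecated
	// v1 Endpoints object. Assumes the program runs in-cluster; the
	// namespace and service name below are placeholders for this sketch.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// EndpointSlices are tied to their Service by this well-known label.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kube-dns"})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, "->", len(s.Endpoints), "endpoints")
		}
	}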
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-208006 -n no-preload-208006
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-208006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (14.76s)

TestStartStop/group/embed-certs/serial/DeployApp (15.17s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-695324 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2825f6cd-a93e-4f6a-9629-98e365849793] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2825f6cd-a93e-4f6a-9629-98e365849793] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003886226s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-695324 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
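The mismatch above is the failing assertion: the busybox pod reports a soft open-file limit of 1024 where the test expects 1048576. A minimal way to re-run the check by hand, using the context and pod name from the log above (a sketch equivalent to the kubectl command the harness runs, not the harness's own code):

	// Re-runs the failed check: exec `ulimit -n` in the busybox pod and
	// compare against the value the test expects (1048576, taken from the
	// failure message above).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "embed-certs-695324",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			fmt.Println("kubectl exec failed:", err)
			return
		}
		got := strings.TrimSpace(string(out))
		if got != "1048576" {
			fmt.Printf("ulimit -n = %s, want 1048576\n", got)
		}
	}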
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-695324
helpers_test.go:243: (dbg) docker inspect embed-certs-695324:

-- stdout --
	[
	    {
	        "Id": "8d08adc104671e5a2326f3660dea61bbe233168f252b0159466f44e4544784af",
	        "Created": "2025-11-21T14:46:23.190114002Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2846872,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:46:23.272787248Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/8d08adc104671e5a2326f3660dea61bbe233168f252b0159466f44e4544784af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d08adc104671e5a2326f3660dea61bbe233168f252b0159466f44e4544784af/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d08adc104671e5a2326f3660dea61bbe233168f252b0159466f44e4544784af/hosts",
	        "LogPath": "/var/lib/docker/containers/8d08adc104671e5a2326f3660dea61bbe233168f252b0159466f44e4544784af/8d08adc104671e5a2326f3660dea61bbe233168f252b0159466f44e4544784af-json.log",
	        "Name": "/embed-certs-695324",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-695324:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-695324",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d08adc104671e5a2326f3660dea61bbe233168f252b0159466f44e4544784af",
	                "LowerDir": "/var/lib/docker/overlay2/1e699f13d458d470622888b2cd3160c1356d7e28175cc94e8e7d65a75291934f-init/diff:/var/lib/docker/overlay2/789a4b9f9866e585907664b1eaf98d94438dbf699e0511f3ca5ba5ea682b005e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1e699f13d458d470622888b2cd3160c1356d7e28175cc94e8e7d65a75291934f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1e699f13d458d470622888b2cd3160c1356d7e28175cc94e8e7d65a75291934f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1e699f13d458d470622888b2cd3160c1356d7e28175cc94e8e7d65a75291934f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-695324",
	                "Source": "/var/lib/docker/volumes/embed-certs-695324/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-695324",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-695324",
	                "name.minikube.sigs.k8s.io": "embed-certs-695324",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7cfd48fa99a33d42f9354ec0af6e2d47c0e1c8f4132db299bbf711b63d443106",
	            "SandboxKey": "/var/run/docker/netns/7cfd48fa99a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36735"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36736"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36739"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36737"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36738"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-695324": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9b:4d:34:49:ea",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "afe33961296510bf0efcd9290091d3839ca2ca5115c07f5d48c5a394b64c12aa",
	                    "EndpointID": "091cecd5e96d4f9b023165e455d4da6687a97f98f9ec55423bcdf33f12d68c29",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-695324",
	                        "8d08adc10467"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
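The Ports map in the inspect output above (host ports 36735-36739 bound to 127.0.0.1) is how minikube reaches services inside the container; later in this log the harness extracts the SSH port with a docker container inspect -f Go template. A small sketch that shells out the same way, reusing the exact template string from the log (the container name comes from this inspect output):

	// Extract the published host port for 22/tcp, using the same Go
	// template that appears later in this log's cli_runner calls.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format,
			"embed-certs-695324").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 36735
	}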
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-695324 -n embed-certs-695324
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-695324 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-695324 logs -n 25: (1.94246361s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-184410 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-184410   │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ force-systemd-env-041746 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-041746 │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ delete  │ -p force-systemd-env-041746                                                                                                                                                                                                                         │ force-systemd-env-041746 │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:43 UTC │
	│ start   │ -p cert-options-035007 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ ssh     │ cert-options-035007 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ ssh     │ -p cert-options-035007 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ delete  │ -p cert-options-035007                                                                                                                                                                                                                              │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ start   │ -p old-k8s-version-092258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:44 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-092258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:44 UTC │ 21 Nov 25 14:44 UTC │
	│ stop    │ -p old-k8s-version-092258 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:44 UTC │ 21 Nov 25 14:45 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-092258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:45 UTC │ 21 Nov 25 14:45 UTC │
	│ start   │ -p old-k8s-version-092258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:45 UTC │ 21 Nov 25 14:45 UTC │
	│ start   │ -p cert-expiration-184410 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-184410   │ jenkins │ v1.37.0 │ 21 Nov 25 14:45 UTC │ 21 Nov 25 14:46 UTC │
	│ delete  │ -p cert-expiration-184410                                                                                                                                                                                                                           │ cert-expiration-184410   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ image   │ old-k8s-version-092258 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ pause   │ -p old-k8s-version-092258 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ start   │ -p no-preload-208006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-208006        │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:47 UTC │
	│ unpause │ -p old-k8s-version-092258 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ delete  │ -p old-k8s-version-092258                                                                                                                                                                                                                           │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ delete  │ -p old-k8s-version-092258                                                                                                                                                                                                                           │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ start   │ -p embed-certs-695324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-695324       │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:47 UTC │
	│ addons  │ enable metrics-server -p no-preload-208006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-208006        │ jenkins │ v1.37.0 │ 21 Nov 25 14:47 UTC │ 21 Nov 25 14:47 UTC │
	│ stop    │ -p no-preload-208006 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-208006        │ jenkins │ v1.37.0 │ 21 Nov 25 14:47 UTC │ 21 Nov 25 14:47 UTC │
	│ addons  │ enable dashboard -p no-preload-208006 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-208006        │ jenkins │ v1.37.0 │ 21 Nov 25 14:47 UTC │ 21 Nov 25 14:47 UTC │
	│ start   │ -p no-preload-208006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-208006        │ jenkins │ v1.37.0 │ 21 Nov 25 14:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:47:48
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:47:48.921466 2852540 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:47:48.921704 2852540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:47:48.921711 2852540 out.go:374] Setting ErrFile to fd 2...
	I1121 14:47:48.921716 2852540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:47:48.921981 2852540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:47:48.922363 2852540 out.go:368] Setting JSON to false
	I1121 14:47:48.923413 2852540 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70217,"bootTime":1763666252,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 14:47:48.923475 2852540 start.go:143] virtualization:  
	I1121 14:47:48.927756 2852540 out.go:179] * [no-preload-208006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:47:48.931082 2852540 notify.go:221] Checking for updates...
	I1121 14:47:48.931764 2852540 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:47:48.935178 2852540 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:47:48.938992 2852540 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:47:48.941925 2852540 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	I1121 14:47:48.945494 2852540 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:47:48.948614 2852540 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:47:48.952503 2852540 config.go:182] Loaded profile config "no-preload-208006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:47:48.953355 2852540 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:47:49.012984 2852540 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:47:49.013162 2852540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:47:49.091388 2852540 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:47:49.071805683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:47:49.091493 2852540 docker.go:319] overlay module found
	I1121 14:47:49.094779 2852540 out.go:179] * Using the docker driver based on existing profile
	I1121 14:47:49.097649 2852540 start.go:309] selected driver: docker
	I1121 14:47:49.097673 2852540 start.go:930] validating driver "docker" against &{Name:no-preload-208006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-208006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:47:49.097899 2852540 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:47:49.098849 2852540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:47:49.160948 2852540 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:47:49.151076381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:47:49.161297 2852540 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:47:49.161336 2852540 cni.go:84] Creating CNI manager for ""
	I1121 14:47:49.161392 2852540 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:47:49.161437 2852540 start.go:353] cluster config:
	{Name:no-preload-208006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-208006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:47:49.164612 2852540 out.go:179] * Starting "no-preload-208006" primary control-plane node in "no-preload-208006" cluster
	I1121 14:47:49.167640 2852540 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 14:47:49.170475 2852540 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:47:49.173451 2852540 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:47:49.173487 2852540 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:47:49.173596 2852540 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/config.json ...
	I1121 14:47:49.173923 2852540 cache.go:107] acquiring lock: {Name:mkcf17144ebb9e4cf17530599113a88357efaad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.173998 2852540 cache.go:115] /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1121 14:47:49.174006 2852540 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 101.364µs
	I1121 14:47:49.174014 2852540 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1121 14:47:49.174024 2852540 cache.go:107] acquiring lock: {Name:mk2c8a0b13865bbc2485475059a5351d81bfa5fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.174053 2852540 cache.go:115] /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1121 14:47:49.174058 2852540 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 35.437µs
	I1121 14:47:49.174064 2852540 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1121 14:47:49.174073 2852540 cache.go:107] acquiring lock: {Name:mk6f5cd4ae8f112091906498a9fbf13737fb7e1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.174100 2852540 cache.go:115] /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1121 14:47:49.174104 2852540 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 32.942µs
	I1121 14:47:49.174110 2852540 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1121 14:47:49.174118 2852540 cache.go:107] acquiring lock: {Name:mka7ae7f4e3f1ac28c23fec2ca16322477d17392 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.174151 2852540 cache.go:115] /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1121 14:47:49.174155 2852540 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 38.292µs
	I1121 14:47:49.174161 2852540 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1121 14:47:49.174169 2852540 cache.go:107] acquiring lock: {Name:mk650c489015758ee052bc4f95260ad256ed811e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.174201 2852540 cache.go:115] /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1121 14:47:49.174206 2852540 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 37.817µs
	I1121 14:47:49.174212 2852540 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1121 14:47:49.174225 2852540 cache.go:107] acquiring lock: {Name:mk491b2c8c5942ed4dbe7004b6031cb6a3cfabea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.174255 2852540 cache.go:115] /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1121 14:47:49.174259 2852540 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.408µs
	I1121 14:47:49.174264 2852540 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1121 14:47:49.174272 2852540 cache.go:107] acquiring lock: {Name:mk63ee20fc68a0d9784f692623ea82f65e136c59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.174297 2852540 cache.go:115] /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1121 14:47:49.174301 2852540 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 30.12µs
	I1121 14:47:49.174306 2852540 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1121 14:47:49.174314 2852540 cache.go:107] acquiring lock: {Name:mk59bedcab9fe96ad70c9894cfda067036228dad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.174339 2852540 cache.go:115] /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1121 14:47:49.174344 2852540 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 31.351µs
	I1121 14:47:49.174350 2852540 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1121 14:47:49.174355 2852540 cache.go:87] Successfully saved all images to host disk.
	I1121 14:47:49.192793 2852540 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:47:49.192817 2852540 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:47:49.192836 2852540 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:47:49.192860 2852540 start.go:360] acquireMachinesLock for no-preload-208006: {Name:mkbee8c16de6300bba99d6e61014d756b275729d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.192922 2852540 start.go:364] duration metric: took 42.223µs to acquireMachinesLock for "no-preload-208006"
	I1121 14:47:49.192944 2852540 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:47:49.192950 2852540 fix.go:54] fixHost starting: 
	I1121 14:47:49.193250 2852540 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:49.210897 2852540 fix.go:112] recreateIfNeeded on no-preload-208006: state=Stopped err=<nil>
	W1121 14:47:49.210930 2852540 fix.go:138] unexpected machine state, will restart: <nil>
	I1121 14:47:49.214285 2852540 out.go:252] * Restarting existing docker container for "no-preload-208006" ...
	I1121 14:47:49.214371 2852540 cli_runner.go:164] Run: docker start no-preload-208006
	I1121 14:47:49.473472 2852540 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:49.497816 2852540 kic.go:430] container "no-preload-208006" state is running.
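	The repeated `docker container inspect --format={{.State.Status}}` calls above poll the container state after `docker start`. A minimal Go equivalent using os/exec (container name taken from the log; error handling simplified):

	// state.go - sketch of the state polling visible above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		st, err := containerState("no-preload-208006")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("state:", st) // e.g. "running" once docker start has finished
	}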
	I1121 14:47:49.500483 2852540 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-208006
	I1121 14:47:49.526844 2852540 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/config.json ...
	I1121 14:47:49.527093 2852540 machine.go:94] provisionDockerMachine start ...
	I1121 14:47:49.527155 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:49.549340 2852540 main.go:143] libmachine: Using SSH client type: native
	I1121 14:47:49.549680 2852540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36740 <nil> <nil>}
	I1121 14:47:49.549689 2852540 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:47:49.551077 2852540 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1121 14:47:52.696828 2852540 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-208006
	
	I1121 14:47:52.696852 2852540 ubuntu.go:182] provisioning hostname "no-preload-208006"
	I1121 14:47:52.696940 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:52.716119 2852540 main.go:143] libmachine: Using SSH client type: native
	I1121 14:47:52.716430 2852540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36740 <nil> <nil>}
	I1121 14:47:52.716447 2852540 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-208006 && echo "no-preload-208006" | sudo tee /etc/hostname
	I1121 14:47:52.871469 2852540 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-208006
	
	I1121 14:47:52.871556 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:52.889686 2852540 main.go:143] libmachine: Using SSH client type: native
	I1121 14:47:52.890002 2852540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36740 <nil> <nil>}
	I1121 14:47:52.890025 2852540 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-208006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-208006/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-208006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:47:53.029653 2852540 main.go:143] libmachine: SSH cmd err, output: <nil>: 
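	The SSH snippet above ensures /etc/hosts carries an entry for the new hostname, rewriting the 127.0.1.1 line if one exists and appending otherwise. A small Go sketch of the same logic (path and hostname are illustrative; the real run performs this edit inside the guest over SSH):

	// hostsfix.go - stdlib re-implementation of the /etc/hosts edit above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, l := range lines {
			f := strings.Fields(l)
			if len(f) >= 2 && f[len(f)-1] == hostname {
				return nil // entry already present, nothing to do
			}
		}
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname // rewrite the loopback alias line
				replaced = true
				break
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname) // fall back to appending
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
	}

	func main() {
		if err := ensureHostsEntry("/tmp/hosts", "no-preload-208006"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}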
	I1121 14:47:53.029681 2852540 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-2633933/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-2633933/.minikube}
	I1121 14:47:53.029712 2852540 ubuntu.go:190] setting up certificates
	I1121 14:47:53.029722 2852540 provision.go:84] configureAuth start
	I1121 14:47:53.029785 2852540 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-208006
	I1121 14:47:53.047711 2852540 provision.go:143] copyHostCerts
	I1121 14:47:53.047780 2852540 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem, removing ...
	I1121 14:47:53.047798 2852540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem
	I1121 14:47:53.047878 2852540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem (1123 bytes)
	I1121 14:47:53.047979 2852540 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem, removing ...
	I1121 14:47:53.047985 2852540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem
	I1121 14:47:53.048011 2852540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem (1679 bytes)
	I1121 14:47:53.048102 2852540 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem, removing ...
	I1121 14:47:53.048112 2852540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem
	I1121 14:47:53.048137 2852540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem (1082 bytes)
	I1121 14:47:53.048189 2852540 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem org=jenkins.no-preload-208006 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-208006]
	I1121 14:47:53.754597 2852540 provision.go:177] copyRemoteCerts
	I1121 14:47:53.754675 2852540 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:47:53.754722 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:53.774910 2852540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36740 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:53.876839 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1121 14:47:53.895889 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:47:53.913553 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:47:53.932064 2852540 provision.go:87] duration metric: took 902.327151ms to configureAuth
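	configureAuth above regenerates the server certificate with the listed SANs and copies it into the machine. A self-contained Go sketch of producing a certificate with those SANs (self-signed here for brevity; minikube actually signs with its CA key pair):

	// selfsign.go - illustrative SAN certificate generation.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-208006"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
			// SANs from the provision.go line above:
			DNSNames:    []string{"localhost", "minikube", "no-preload-208006"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}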
	I1121 14:47:53.932130 2852540 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:47:53.932362 2852540 config.go:182] Loaded profile config "no-preload-208006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:47:53.932376 2852540 machine.go:97] duration metric: took 4.405275171s to provisionDockerMachine
	I1121 14:47:53.932385 2852540 start.go:293] postStartSetup for "no-preload-208006" (driver="docker")
	I1121 14:47:53.932394 2852540 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:47:53.932450 2852540 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:47:53.932505 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:53.950080 2852540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36740 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:54.049389 2852540 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:47:54.052853 2852540 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:47:54.052885 2852540 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:47:54.052897 2852540 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/addons for local assets ...
	I1121 14:47:54.052954 2852540 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/files for local assets ...
	I1121 14:47:54.053077 2852540 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem -> 26357852.pem in /etc/ssl/certs
	I1121 14:47:54.053215 2852540 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:47:54.060803 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:47:54.079615 2852540 start.go:296] duration metric: took 147.212809ms for postStartSetup
	I1121 14:47:54.079696 2852540 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:47:54.079736 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:54.104073 2852540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36740 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:54.202133 2852540 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:47:54.206805 2852540 fix.go:56] duration metric: took 5.013847847s for fixHost
	I1121 14:47:54.206860 2852540 start.go:83] releasing machines lock for "no-preload-208006", held for 5.013897863s
	I1121 14:47:54.206929 2852540 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-208006
	I1121 14:47:54.223643 2852540 ssh_runner.go:195] Run: cat /version.json
	I1121 14:47:54.223647 2852540 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:47:54.223712 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:54.223755 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:54.250870 2852540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36740 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:54.252248 2852540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36740 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:54.353921 2852540 ssh_runner.go:195] Run: systemctl --version
	I1121 14:47:54.470187 2852540 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:47:54.475176 2852540 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:47:54.475286 2852540 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:47:54.485674 2852540 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 14:47:54.485699 2852540 start.go:496] detecting cgroup driver to use...
	I1121 14:47:54.485733 2852540 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:47:54.485788 2852540 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:47:54.503580 2852540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:47:54.518225 2852540 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:47:54.518301 2852540 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:47:54.535071 2852540 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:47:54.549959 2852540 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:47:54.673655 2852540 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:47:54.793907 2852540 docker.go:234] disabling docker service ...
	I1121 14:47:54.794034 2852540 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:47:54.810997 2852540 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:47:54.824239 2852540 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:47:54.935125 2852540 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:47:55.062499 2852540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:47:55.075779 2852540 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:47:55.096949 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1121 14:47:55.108652 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:47:55.118394 2852540 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1121 14:47:55.118539 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1121 14:47:55.128557 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:47:55.140188 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:47:55.149880 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:47:55.159944 2852540 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:47:55.169676 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:47:55.179345 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:47:55.189541 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:47:55.199278 2852540 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:47:55.208191 2852540 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:47:55.216097 2852540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:47:55.345014 2852540 ssh_runner.go:195] Run: sudo systemctl restart containerd
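	The run of sed commands above patches /etc/containerd/config.toml in place, e.g. forcing SystemdCgroup = false for the cgroupfs driver. The same edit expressed in Go with a multiline regexp (illustrative path, one rule only):

	// tomlpatch.go - Go equivalent of:
	//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' config.toml
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		path := "/tmp/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, patched, 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}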
	I1121 14:47:55.503356 2852540 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:47:55.503477 2852540 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:47:55.507454 2852540 start.go:564] Will wait 60s for crictl version
	I1121 14:47:55.507569 2852540 ssh_runner.go:195] Run: which crictl
	I1121 14:47:55.514434 2852540 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:47:55.539911 2852540 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
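	"Will wait 60s for socket path /run/containerd/containerd.sock" above is a readiness poll on containerd's unix socket after the restart. A sketch of such a wait loop (timeout and path taken from the log; the stat-based check the log actually uses is equally valid):

	// sockwait.go - poll a unix socket until it answers or the deadline passes.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("unix", path, time.Second)
			if err == nil {
				conn.Close()
				return nil // containerd is answering on its socket
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("socket %s not ready within %s", path, timeout)
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}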
	I1121 14:47:55.540031 2852540 ssh_runner.go:195] Run: containerd --version
	I1121 14:47:55.571210 2852540 ssh_runner.go:195] Run: containerd --version
	I1121 14:47:55.602049 2852540 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1121 14:47:55.605190 2852540 cli_runner.go:164] Run: docker network inspect no-preload-208006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:47:55.622969 2852540 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:47:55.626857 2852540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:47:55.636527 2852540 kubeadm.go:884] updating cluster {Name:no-preload-208006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-208006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:47:55.636656 2852540 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:47:55.636710 2852540 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:47:55.662144 2852540 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:47:55.662170 2852540 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:47:55.662178 2852540 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1121 14:47:55.662278 2852540 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-208006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-208006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:47:55.662346 2852540 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:47:55.690138 2852540 cni.go:84] Creating CNI manager for ""
	I1121 14:47:55.690159 2852540 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:47:55.690177 2852540 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:47:55.690199 2852540 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-208006 NodeName:no-preload-208006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:47:55.690326 2852540 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-208006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
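	The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A stdlib-only Go sketch of splitting such a stream and reporting each document's kind (a real consumer would use a proper YAML parser):

	// splitdocs.go - split a multi-document YAML stream on "---" separators.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		config := `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration`

		for i, doc := range strings.Split(config, "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind: ") {
					fmt.Printf("document %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
				}
			}
		}
	}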
	
	I1121 14:47:55.690401 2852540 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:47:55.699042 2852540 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:47:55.699125 2852540 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:47:55.706607 2852540 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1121 14:47:55.719999 2852540 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:47:55.732470 2852540 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1121 14:47:55.745991 2852540 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:47:55.749657 2852540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:47:55.759125 2852540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:47:55.872400 2852540 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:47:55.888513 2852540 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006 for IP: 192.168.85.2
	I1121 14:47:55.888532 2852540 certs.go:195] generating shared ca certs ...
	I1121 14:47:55.888548 2852540 certs.go:227] acquiring lock for ca certs: {Name:mk0a1b8efa9f1d453751b4f7afafeea16d7243a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:47:55.888693 2852540 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.key
	I1121 14:47:55.888731 2852540 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.key
	I1121 14:47:55.888739 2852540 certs.go:257] generating profile certs ...
	I1121 14:47:55.888830 2852540 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.key
	I1121 14:47:55.888890 2852540 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.key.78bb1819
	I1121 14:47:55.888933 2852540 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.key
	I1121 14:47:55.889073 2852540 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785.pem (1338 bytes)
	W1121 14:47:55.889104 2852540 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785_empty.pem, impossibly tiny 0 bytes
	I1121 14:47:55.889113 2852540 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:47:55.889140 2852540 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem (1082 bytes)
	I1121 14:47:55.889168 2852540 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:47:55.889190 2852540 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem (1679 bytes)
	I1121 14:47:55.889233 2852540 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:47:55.889868 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:47:55.909424 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:47:55.927725 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:47:55.945553 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:47:55.963355 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:47:55.996425 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:47:56.023401 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:47:56.062566 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:47:56.101527 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /usr/share/ca-certificates/26357852.pem (1708 bytes)
	I1121 14:47:56.124150 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:47:56.143823 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785.pem --> /usr/share/ca-certificates/2635785.pem (1338 bytes)
	I1121 14:47:56.164251 2852540 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:47:56.177953 2852540 ssh_runner.go:195] Run: openssl version
	I1121 14:47:56.186725 2852540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:47:56.198025 2852540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:47:56.202911 2852540 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:47:56.202976 2852540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:47:56.255504 2852540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:47:56.264092 2852540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2635785.pem && ln -fs /usr/share/ca-certificates/2635785.pem /etc/ssl/certs/2635785.pem"
	I1121 14:47:56.272216 2852540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2635785.pem
	I1121 14:47:56.275921 2852540 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/2635785.pem
	I1121 14:47:56.275988 2852540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2635785.pem
	I1121 14:47:56.316927 2852540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2635785.pem /etc/ssl/certs/51391683.0"
	I1121 14:47:56.324682 2852540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26357852.pem && ln -fs /usr/share/ca-certificates/26357852.pem /etc/ssl/certs/26357852.pem"
	I1121 14:47:56.332971 2852540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26357852.pem
	I1121 14:47:56.336786 2852540 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/26357852.pem
	I1121 14:47:56.336902 2852540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26357852.pem
	I1121 14:47:56.377763 2852540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/26357852.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:47:56.386236 2852540 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:47:56.390166 2852540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 14:47:56.431383 2852540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 14:47:56.474709 2852540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 14:47:56.519652 2852540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 14:47:56.572147 2852540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 14:47:56.623334 2852540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
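	Each `openssl x509 ... -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check in Go (illustrative path; 86400 s = 24 h):

	// checkend.go - Go equivalent of `openssl x509 -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Expiring "soon" means NotAfter falls before now+d.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/tmp/apiserver.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}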
	I1121 14:47:56.716330 2852540 kubeadm.go:401] StartCluster: {Name:no-preload-208006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-208006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:47:56.716421 2852540 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:47:56.716493 2852540 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:47:56.792581 2852540 cri.go:89] found id: "9a1d80f65a499b5e66dbd87f28482a42a0845efa7b882a7e675d26b780ed885b"
	I1121 14:47:56.792607 2852540 cri.go:89] found id: "b011fda5b154db9e2a0ab88b094fd30c123607858f7144eb0187d89bc6c74ac9"
	I1121 14:47:56.792611 2852540 cri.go:89] found id: "f3dd66e01305aa67da4fef766c626727d676c7ffe74473a1010270d904b974d1"
	I1121 14:47:56.792621 2852540 cri.go:89] found id: "7e57e7c8851a9cc8ab9aae48190e5273f29aca6479946be08dd8ce6aae53eae4"
	I1121 14:47:56.792625 2852540 cri.go:89] found id: "05bfdef30141a8e21622a5df2d0b5fad2030cdf0b24ad8c65c35f99be64b97da"
	I1121 14:47:56.792640 2852540 cri.go:89] found id: "e51ffcbc830b08843be90ae4a5cbc20e3b6d6721e6d01983023416c9a7ebff67"
	I1121 14:47:56.792644 2852540 cri.go:89] found id: "670da2ec0c5a22405cd819ddba5cacc0165673f1fa923b5507091c8767428c9e"
	I1121 14:47:56.792647 2852540 cri.go:89] found id: "8f30bcc0ffef68f33676c531a54c185943fd5843eeb062e2a7a47fc41ccff421"
	I1121 14:47:56.792650 2852540 cri.go:89] found id: ""
	I1121 14:47:56.792697 2852540 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1121 14:47:56.838753 2852540 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"78fc78989bb3da213502a60531419000303d98f02aa9059a0dc4195ead8b64d1","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/78fc78989bb3da213502a60531419000303d98f02aa9059a0dc4195ead8b64d1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/78fc78989bb3da213502a60531419000303d98f02aa9059a0dc4195ead8b64d1/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"78fc78989bb3da213502a60531419000303d98f02aa9059a0dc4195ead8b64d1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-208006_749c23d679ae75c87c21138a837c7997","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-no-preload-208006","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"749c23d679ae75c87c21138a837c7997"},"owner":"root"},{"ociVersion":"1.2.1","id":"dbeb041c180bfd8efe1a8378342d4b1cc19fc87407e31aa23f81d737902ef8f6","pid":924,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbeb041c180bfd8efe1a8378342d4b1cc19fc87407e31aa23f81d737902ef8f6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbeb041c180bfd8efe1a8378342d4b1cc19fc87407e31aa23f81d737902ef8f6/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"dbeb041c180bfd8efe1a8378342d4b1cc19fc87407e31aa23f81d737902ef8f6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-208006_e90a541b9ba9814a80f6149e26dd1e8d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-208006","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e90a541b9ba9814a80f6149e26dd1e8d"},"owner":"root"}]
	I1121 14:47:56.838841 2852540 cri.go:126] list returned 2 containers
	I1121 14:47:56.838850 2852540 cri.go:129] container: {ID:78fc78989bb3da213502a60531419000303d98f02aa9059a0dc4195ead8b64d1 Status:stopped}
	I1121 14:47:56.838873 2852540 cri.go:131] skipping 78fc78989bb3da213502a60531419000303d98f02aa9059a0dc4195ead8b64d1 - not in ps
	I1121 14:47:56.838878 2852540 cri.go:129] container: {ID:dbeb041c180bfd8efe1a8378342d4b1cc19fc87407e31aa23f81d737902ef8f6 Status:created}
	I1121 14:47:56.838883 2852540 cri.go:131] skipping dbeb041c180bfd8efe1a8378342d4b1cc19fc87407e31aa23f81d737902ef8f6 - not in ps
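	cri.go above decodes the `runc list -f json` payload and skips containers whose IDs don't appear in the `crictl ps` output ("not in ps"). A trimmed Go sketch of that decode-and-filter step (payload shortened; the inPs set is a stand-in for the crictl ID list):

	// runclist.go - decode runc's JSON listing and filter against a known-ID set.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "stopped", "created", "running"
	}

	func main() {
		// Trimmed-down version of the `runc list -f json` payload logged above.
		raw := `[{"id":"78fc78989bb3","status":"stopped"},{"id":"dbeb041c180b","status":"created"}]`
		var cs []runcContainer
		if err := json.Unmarshal([]byte(raw), &cs); err != nil {
			fmt.Println(err)
			return
		}
		// IDs reported by `crictl ps -a --quiet`; anything else is "not in ps".
		inPs := map[string]bool{}
		for _, c := range cs {
			if !inPs[c.ID] {
				fmt.Printf("skipping %s - not in ps\n", c.ID)
				continue
			}
			fmt.Println("keeping", c.ID)
		}
	}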
	I1121 14:47:56.838941 2852540 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:47:56.865391 2852540 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 14:47:56.865408 2852540 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 14:47:56.865466 2852540 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 14:47:56.888216 2852540 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:47:56.889137 2852540 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-208006" does not appear in /home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:47:56.889657 2852540 kubeconfig.go:62] /home/jenkins/minikube-integration/21847-2633933/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-208006" cluster setting kubeconfig missing "no-preload-208006" context setting]
	I1121 14:47:56.890383 2852540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/kubeconfig: {Name:mkd905aaf74d26e32c0b3e46a7edfbf13f4b98ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:47:56.894225 2852540 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 14:47:56.930744 2852540 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1121 14:47:56.930774 2852540 kubeadm.go:602] duration metric: took 65.354322ms to restartPrimaryControlPlane
	I1121 14:47:56.930783 2852540 kubeadm.go:403] duration metric: took 214.463799ms to StartCluster
	I1121 14:47:56.930798 2852540 settings.go:142] acquiring lock: {Name:mkd6064915932eca5a3b1d70feb4ec8240f340da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:47:56.930870 2852540 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:47:56.932293 2852540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/kubeconfig: {Name:mkd905aaf74d26e32c0b3e46a7edfbf13f4b98ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:47:56.932504 2852540 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:47:56.932887 2852540 config.go:182] Loaded profile config "no-preload-208006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:47:56.932927 2852540 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:47:56.932994 2852540 addons.go:70] Setting storage-provisioner=true in profile "no-preload-208006"
	I1121 14:47:56.933008 2852540 addons.go:239] Setting addon storage-provisioner=true in "no-preload-208006"
	W1121 14:47:56.933013 2852540 addons.go:248] addon storage-provisioner should already be in state true
	I1121 14:47:56.933488 2852540 host.go:66] Checking if "no-preload-208006" exists ...
	I1121 14:47:56.933677 2852540 addons.go:70] Setting default-storageclass=true in profile "no-preload-208006"
	I1121 14:47:56.933692 2852540 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-208006"
	I1121 14:47:56.933949 2852540 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:56.934470 2852540 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:56.947603 2852540 addons.go:70] Setting metrics-server=true in profile "no-preload-208006"
	I1121 14:47:56.947675 2852540 addons.go:239] Setting addon metrics-server=true in "no-preload-208006"
	W1121 14:47:56.947699 2852540 addons.go:248] addon metrics-server should already be in state true
	I1121 14:47:56.947764 2852540 host.go:66] Checking if "no-preload-208006" exists ...
	I1121 14:47:56.948255 2852540 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:56.964826 2852540 addons.go:70] Setting dashboard=true in profile "no-preload-208006"
	I1121 14:47:56.964859 2852540 addons.go:239] Setting addon dashboard=true in "no-preload-208006"
	W1121 14:47:56.967011 2852540 addons.go:248] addon dashboard should already be in state true
	I1121 14:47:56.967066 2852540 host.go:66] Checking if "no-preload-208006" exists ...
	I1121 14:47:56.969565 2852540 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:56.975001 2852540 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:47:56.977908 2852540 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:47:56.977937 2852540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:47:56.978005 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:56.982870 2852540 out.go:179] * Verifying Kubernetes components...
	I1121 14:47:56.993147 2852540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:47:57.051038 2852540 addons.go:239] Setting addon default-storageclass=true in "no-preload-208006"
	W1121 14:47:57.051062 2852540 addons.go:248] addon default-storageclass should already be in state true
	I1121 14:47:57.051087 2852540 host.go:66] Checking if "no-preload-208006" exists ...
	I1121 14:47:57.051486 2852540 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:57.051687 2852540 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1121 14:47:57.057232 2852540 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1121 14:47:57.057259 2852540 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1121 14:47:57.057351 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:57.061353 2852540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36740 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:57.067400 2852540 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1121 14:47:57.077439 2852540 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	c1e6b3e96fde8       1611cd07b61d5       7 seconds ago        Running             busybox                   0                   a4dbf3d722bf6       busybox                                      default
	1fa3deca4f712       138784d87c9c5       12 seconds ago       Running             coredns                   0                   cd78e9aa80e31       coredns-66bc5c9577-fs65k                     kube-system
	c55d3da92c0df       ba04bb24b9575       13 seconds ago       Running             storage-provisioner       0                   e0215c9353668       storage-provisioner                          kube-system
	a5a9aa39a69c5       b1a8c6f707935       54 seconds ago       Running             kindnet-cni               0                   e09682004aff5       kindnet-7hksz                                kube-system
	9314412663f4f       05baa95f5142d       54 seconds ago       Running             kube-proxy                0                   ed42d49616af7       kube-proxy-r9v4p                             kube-system
	47316d8361377       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   ed021ff9e7e8a       kube-controller-manager-embed-certs-695324   kube-system
	8089a6de675f7       a1894772a478e       About a minute ago   Running             etcd                      0                   ed4fb53003b44       etcd-embed-certs-695324                      kube-system
	9d593bf1d15ca       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   9cd78d9766fd3       kube-apiserver-embed-certs-695324            kube-system
	3ab7a046e22a5       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   9583f713252bb       kube-scheduler-embed-certs-695324            kube-system
	
	
	==> containerd <==
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.585569403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fs65k,Uid:a1f4fe7a-90b6-4b7a-8bdd-e805634b811d,Namespace:kube-system,Attempt:0,}"
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.635438419Z" level=info msg="connecting to shim cd78e9aa80e31bd3094296d8c773ac858d8ca7bb1957cc480cef10de9448834b" address="unix:///run/containerd/s/422a86a19d4f1b65d6df97e09f322b6ab2f0266e979c4539a7c7f487f15e47d0" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.693207589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fs65k,Uid:a1f4fe7a-90b6-4b7a-8bdd-e805634b811d,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd78e9aa80e31bd3094296d8c773ac858d8ca7bb1957cc480cef10de9448834b\""
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.701313088Z" level=info msg="CreateContainer within sandbox \"cd78e9aa80e31bd3094296d8c773ac858d8ca7bb1957cc480cef10de9448834b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.716215117Z" level=info msg="Container 1fa3deca4f712ab99b7ba61a6a86cc5dfba490db5336b7ea06ee51c89f8a3fa2: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.726321862Z" level=info msg="CreateContainer within sandbox \"cd78e9aa80e31bd3094296d8c773ac858d8ca7bb1957cc480cef10de9448834b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1fa3deca4f712ab99b7ba61a6a86cc5dfba490db5336b7ea06ee51c89f8a3fa2\""
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.727165276Z" level=info msg="StartContainer for \"1fa3deca4f712ab99b7ba61a6a86cc5dfba490db5336b7ea06ee51c89f8a3fa2\""
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.729263455Z" level=info msg="connecting to shim 1fa3deca4f712ab99b7ba61a6a86cc5dfba490db5336b7ea06ee51c89f8a3fa2" address="unix:///run/containerd/s/422a86a19d4f1b65d6df97e09f322b6ab2f0266e979c4539a7c7f487f15e47d0" protocol=ttrpc version=3
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.808213646Z" level=info msg="StartContainer for \"1fa3deca4f712ab99b7ba61a6a86cc5dfba490db5336b7ea06ee51c89f8a3fa2\" returns successfully"
	Nov 21 14:47:48 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:48.838284601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2825f6cd-a93e-4f6a-9629-98e365849793,Namespace:default,Attempt:0,}"
	Nov 21 14:47:48 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:48.909728069Z" level=info msg="connecting to shim a4dbf3d722bf60eb6930556fb80444d529fe992c62e7e70b2a0987893d78ae01" address="unix:///run/containerd/s/458723e91383e9bdc5dc0fc8854af6cfe500486d5aaaa77752a154c3b28780ce" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:47:49 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:49.019881957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2825f6cd-a93e-4f6a-9629-98e365849793,Namespace:default,Attempt:0,} returns sandbox id \"a4dbf3d722bf60eb6930556fb80444d529fe992c62e7e70b2a0987893d78ae01\""
	Nov 21 14:47:49 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:49.023172849Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.153824896Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.155810800Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937186"
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.158272965Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.161588628Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.162202797Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.138792202s"
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.162351757Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.171370656Z" level=info msg="CreateContainer within sandbox \"a4dbf3d722bf60eb6930556fb80444d529fe992c62e7e70b2a0987893d78ae01\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.185103783Z" level=info msg="Container c1e6b3e96fde8b1d1ca7d0ec7c0eaef38e40b6083887df3831e8038feefed77c: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.200956333Z" level=info msg="CreateContainer within sandbox \"a4dbf3d722bf60eb6930556fb80444d529fe992c62e7e70b2a0987893d78ae01\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"c1e6b3e96fde8b1d1ca7d0ec7c0eaef38e40b6083887df3831e8038feefed77c\""
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.203475465Z" level=info msg="StartContainer for \"c1e6b3e96fde8b1d1ca7d0ec7c0eaef38e40b6083887df3831e8038feefed77c\""
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.204569973Z" level=info msg="connecting to shim c1e6b3e96fde8b1d1ca7d0ec7c0eaef38e40b6083887df3831e8038feefed77c" address="unix:///run/containerd/s/458723e91383e9bdc5dc0fc8854af6cfe500486d5aaaa77752a154c3b28780ce" protocol=ttrpc version=3
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.281790809Z" level=info msg="StartContainer for \"c1e6b3e96fde8b1d1ca7d0ec7c0eaef38e40b6083887df3831e8038feefed77c\" returns successfully"
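	
	The containerd entries above trace the full CRI lifecycle for the busybox pod: RunPodSandbox, PullImage (the 1.28.4-glibc image resolved in about 2.1s), CreateContainer, and StartContainer, all returning successfully. To re-inspect that container from inside the node, something like the following sketch should work (the truncated ID is taken from the log; use the full ID if the installed crictl does not accept prefixes):
	
	  out/minikube-linux-arm64 ssh -p embed-certs-695324 "sudo crictl inspect c1e6b3e96fde8b"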
	
	
	==> coredns [1fa3deca4f712ab99b7ba61a6a86cc5dfba490db5336b7ea06ee51c89f8a3fa2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51493 - 30880 "HINFO IN 8925272948188809987.8459650712627436794. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.049608354s
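	
	The HINFO lookup for a random name is CoreDNS's loop-plugin self-probe; the NXDOMAIN answer means no forwarding loop was detected, so DNS came up clean. Recent CoreDNS output can be re-read through the Deployment (assuming the stock deploy/coredns name):
	
	  kubectl --context embed-certs-695324 -n kube-system logs deploy/coredns --tail=20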
	
	
	==> describe nodes <==
	Name:               embed-certs-695324
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-695324
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=embed-certs-695324
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_46_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:46:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-695324
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:47:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:47:44 +0000   Fri, 21 Nov 2025 14:46:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:47:44 +0000   Fri, 21 Nov 2025 14:46:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:47:44 +0000   Fri, 21 Nov 2025 14:46:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:47:44 +0000   Fri, 21 Nov 2025 14:47:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-695324
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                b11a29ee-cc79-4f00-a5de-9472aa7b6725
	  Boot ID:                    41b0e09d-5a9a-49c9-8980-dca608ba3fce
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-fs65k                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-695324                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-7hksz                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-695324             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-embed-certs-695324    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-r9v4p                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-695324             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 74s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node embed-certs-695324 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node embed-certs-695324 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node embed-certs-695324 status is now: NodeHasSufficientPID
	  Normal   Starting                 74s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-695324 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-695324 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-695324 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-695324 event: Registered Node embed-certs-695324 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-695324 status is now: NodeReady
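	
	The node turned Ready at 14:47:44, fourteen seconds before this snapshot, with no taints and all pressure conditions False, and the 10s-old busybox pod is already scheduled onto it. Just the Ready condition can be polled with kubectl's JSONPath output:
	
	  kubectl --context embed-certs-695324 get node embed-certs-695324 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'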
	
	
	==> dmesg <==
	[Nov21 13:02] overlayfs: idmapped layers are currently not supported
	[Nov21 13:03] overlayfs: idmapped layers are currently not supported
	[Nov21 13:06] overlayfs: idmapped layers are currently not supported
	[Nov21 13:08] overlayfs: idmapped layers are currently not supported
	[Nov21 13:09] overlayfs: idmapped layers are currently not supported
	[Nov21 13:10] overlayfs: idmapped layers are currently not supported
	[ +19.808801] overlayfs: idmapped layers are currently not supported
	[Nov21 13:11] overlayfs: idmapped layers are currently not supported
	[Nov21 13:12] overlayfs: idmapped layers are currently not supported
	[Nov21 13:13] overlayfs: idmapped layers are currently not supported
	[Nov21 13:14] overlayfs: idmapped layers are currently not supported
	[Nov21 13:15] overlayfs: idmapped layers are currently not supported
	[ +16.772572] overlayfs: idmapped layers are currently not supported
	[Nov21 13:16] overlayfs: idmapped layers are currently not supported
	[Nov21 13:17] overlayfs: idmapped layers are currently not supported
	[ +27.396777] overlayfs: idmapped layers are currently not supported
	[Nov21 13:18] overlayfs: idmapped layers are currently not supported
	[ +25.430119] overlayfs: idmapped layers are currently not supported
	[Nov21 13:19] overlayfs: idmapped layers are currently not supported
	[Nov21 13:20] overlayfs: idmapped layers are currently not supported
	[Nov21 13:21] overlayfs: idmapped layers are currently not supported
	[Nov21 13:22] overlayfs: idmapped layers are currently not supported
	[Nov21 13:23] overlayfs: idmapped layers are currently not supported
	[Nov21 13:24] overlayfs: idmapped layers are currently not supported
	[Nov21 13:55] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [8089a6de675f779d3fb989cf34d1ec9a6079eb37021c60f8536d10642bd9eade] <==
	{"level":"warn","ts":"2025-11-21T14:46:51.846673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:51.883085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:51.914407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:51.924987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:51.967070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:51.987467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.022282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.054456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.122238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.143065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.166887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.193193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.225450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.245230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.278757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.303610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.333910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.355045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.379822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.423334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.484845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.528051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.570267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.602178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.725173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39822","server-name":"","error":"EOF"}
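	
	These "rejected connection ... EOF" warnings, all within about a second of etcd bootstrap, are characteristic of plain TCP probes (readiness dials) that connect to the client port and hang up before completing the TLS handshake; they do not indicate data-plane trouble. Endpoint health can be confirmed from inside the etcd pod; the certificate paths below assume minikube's default /var/lib/minikube/certs layout:
	
	  kubectl --context embed-certs-695324 -n kube-system exec etcd-embed-certs-695324 -- \
	    etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key endpoint health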
	
	
	==> kernel <==
	 14:47:58 up 19:30,  0 user,  load average: 4.51, 3.56, 2.97
	Linux embed-certs-695324 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a5a9aa39a69c531337c2df1b274bb0a10160a7c35003839895783b3de7fbf962] <==
	I1121 14:47:04.622383       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:47:04.622660       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1121 14:47:04.622783       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:47:04.622795       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:47:04.622808       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:47:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:47:04.824310       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:47:04.824327       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:47:04.824336       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:47:04.824644       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 14:47:34.829354       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 14:47:34.829504       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 14:47:34.829597       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 14:47:34.829681       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1121 14:47:36.424963       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:47:36.425000       1 metrics.go:72] Registering metrics
	I1121 14:47:36.425092       1 controller.go:711] "Syncing nftables rules"
	I1121 14:47:44.829617       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:47:44.829675       1 main.go:301] handling current node
	I1121 14:47:54.825122       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:47:54.825288       1 main.go:301] handling current node
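	
	kindnet lost the apiserver for a single 30s list/watch window (the i/o timeouts at 14:47:34 are lists issued at startup that never completed) and recovered two seconds later once its informer caches synced; by 14:47:44 it is handling the node normally. The nri line only reflects the absent /var/run/nri/nri.sock socket. The reflector errors can be re-read in isolation (assuming the DaemonSet carries the app=kindnet label):
	
	  kubectl --context embed-certs-695324 -n kube-system logs -l app=kindnet --tail=100 | grep "Failed to watch"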
	
	
	==> kube-apiserver [9d593bf1d15ca44ecf7c0dfbba9a918a766622933ff8bbdee876cc68aea573f9] <==
	I1121 14:46:53.908650       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1121 14:46:53.913098       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:46:53.915636       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 14:46:53.920943       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:46:53.923599       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:46:53.923641       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:46:54.085942       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:46:54.571719       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:46:54.591302       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:46:54.591325       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:46:55.824240       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:46:55.875106       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:46:55.971922       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:46:55.981167       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1121 14:46:55.982658       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:46:55.990959       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:46:56.842003       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:46:56.872118       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:46:56.969794       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:46:57.034350       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:47:02.162738       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:47:02.169366       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:47:02.660134       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:47:02.960766       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1121 14:47:56.732181       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:46244: use of closed network connection
	
	
	==> kube-controller-manager [47316d836137789125d57ba9c739c2e03666cfd1e711824a4e9100be521f1a8c] <==
	I1121 14:47:01.846712       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-695324"
	I1121 14:47:01.846861       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1121 14:47:01.849468       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 14:47:01.861836       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 14:47:01.866355       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:47:01.866467       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 14:47:01.866550       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 14:47:01.868251       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:47:01.868521       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 14:47:01.868569       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1121 14:47:01.868626       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:47:01.868744       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 14:47:01.871795       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 14:47:01.871990       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1121 14:47:01.872712       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:47:01.873418       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 14:47:01.873526       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:47:01.873540       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:47:01.873548       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1121 14:47:01.880002       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:47:01.891842       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:47:01.903582       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:47:01.903724       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 14:47:01.903789       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:47:46.852997       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9314412663f4fd283ab31086847bd67f2e7c6d2447448091c9c117bf267f7ca1] <==
	I1121 14:47:04.399232       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:47:04.562678       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:47:04.664397       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:47:04.664435       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1121 14:47:04.664511       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:47:04.775740       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:47:04.775795       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:47:04.789783       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:47:04.790086       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:47:04.790110       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:47:04.795407       1 config.go:200] "Starting service config controller"
	I1121 14:47:04.795426       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:47:04.795448       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:47:04.795452       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:47:04.795464       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:47:04.795468       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:47:04.796083       1 config.go:309] "Starting node config controller"
	I1121 14:47:04.796090       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:47:04.796096       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:47:04.897817       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:47:04.897826       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:47:04.897858       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
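	
	kube-proxy settled on the IPv4 iptables proxier in dual-stack mode and synced all four config controllers within roughly 0.1s; the nodePortAddresses message is an advisory, not an error. That the proxier actually programmed rules can be spot-checked by counting KUBE-SVC chains on the node:
	
	  out/minikube-linux-arm64 ssh -p embed-certs-695324 "sudo iptables-save" | grep -c KUBE-SVC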
	
	
	==> kube-scheduler [3ab7a046e22a5863f4b346224e2f97c150b588c9db1300593a985d262da67008] <==
	E1121 14:46:53.856032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:46:53.861230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:46:53.861531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:46:53.861678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:46:53.871029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:46:54.659892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:46:54.690709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:46:54.746032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:46:54.754050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:46:54.781352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:46:54.862489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:46:54.872673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:46:54.874528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:46:54.893411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:46:54.983104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:46:54.989643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:46:55.047213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:46:55.080021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:46:55.169612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:46:55.187169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:46:55.201968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:46:55.205562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:46:55.237978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1121 14:46:55.290233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1121 14:46:57.518380       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
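	
	Every "Failed to watch ... forbidden" line predates 14:46:57, when the extension-apiserver client-ca informer finally synced: this is the usual startup race in which the scheduler's informers begin listing before the apiserver has finished bootstrapping the default RBAC roles, and none of the errors recur afterwards. The grants can be verified post-bootstrap with impersonation:
	
	  kubectl --context embed-certs-695324 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler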
	
	
	==> kubelet <==
	Nov 21 14:46:57 embed-certs-695324 kubelet[1475]: I1121 14:46:57.989192    1475 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 21 14:46:58 embed-certs-695324 kubelet[1475]: I1121 14:46:58.136219    1475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-695324" podStartSLOduration=1.1361956229999999 podStartE2EDuration="1.136195623s" podCreationTimestamp="2025-11-21 14:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:46:58.126492132 +0000 UTC m=+1.377369443" watchObservedRunningTime="2025-11-21 14:46:58.136195623 +0000 UTC m=+1.387072918"
	Nov 21 14:47:01 embed-certs-695324 kubelet[1475]: I1121 14:47:01.863023    1475 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:47:01 embed-certs-695324 kubelet[1475]: I1121 14:47:01.865290    1475 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:47:02 embed-certs-695324 kubelet[1475]: I1121 14:47:02.904457    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0cf5293a-46ff-4f69-ba6e-3e4d9bb7e8c1-cni-cfg\") pod \"kindnet-7hksz\" (UID: \"0cf5293a-46ff-4f69-ba6e-3e4d9bb7e8c1\") " pod="kube-system/kindnet-7hksz"
	Nov 21 14:47:02 embed-certs-695324 kubelet[1475]: I1121 14:47:02.904505    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cf5293a-46ff-4f69-ba6e-3e4d9bb7e8c1-xtables-lock\") pod \"kindnet-7hksz\" (UID: \"0cf5293a-46ff-4f69-ba6e-3e4d9bb7e8c1\") " pod="kube-system/kindnet-7hksz"
	Nov 21 14:47:02 embed-certs-695324 kubelet[1475]: I1121 14:47:02.904532    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc8f8403-f2a8-4b2a-9e2e-1be4b561b237-xtables-lock\") pod \"kube-proxy-r9v4p\" (UID: \"dc8f8403-f2a8-4b2a-9e2e-1be4b561b237\") " pod="kube-system/kube-proxy-r9v4p"
	Nov 21 14:47:02 embed-certs-695324 kubelet[1475]: I1121 14:47:02.904549    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc8f8403-f2a8-4b2a-9e2e-1be4b561b237-lib-modules\") pod \"kube-proxy-r9v4p\" (UID: \"dc8f8403-f2a8-4b2a-9e2e-1be4b561b237\") " pod="kube-system/kube-proxy-r9v4p"
	Nov 21 14:47:02 embed-certs-695324 kubelet[1475]: I1121 14:47:02.904566    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnsl5\" (UniqueName: \"kubernetes.io/projected/dc8f8403-f2a8-4b2a-9e2e-1be4b561b237-kube-api-access-wnsl5\") pod \"kube-proxy-r9v4p\" (UID: \"dc8f8403-f2a8-4b2a-9e2e-1be4b561b237\") " pod="kube-system/kube-proxy-r9v4p"
	Nov 21 14:47:02 embed-certs-695324 kubelet[1475]: I1121 14:47:02.904586    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cf5293a-46ff-4f69-ba6e-3e4d9bb7e8c1-lib-modules\") pod \"kindnet-7hksz\" (UID: \"0cf5293a-46ff-4f69-ba6e-3e4d9bb7e8c1\") " pod="kube-system/kindnet-7hksz"
	Nov 21 14:47:02 embed-certs-695324 kubelet[1475]: I1121 14:47:02.904607    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6ftp\" (UniqueName: \"kubernetes.io/projected/0cf5293a-46ff-4f69-ba6e-3e4d9bb7e8c1-kube-api-access-d6ftp\") pod \"kindnet-7hksz\" (UID: \"0cf5293a-46ff-4f69-ba6e-3e4d9bb7e8c1\") " pod="kube-system/kindnet-7hksz"
	Nov 21 14:47:02 embed-certs-695324 kubelet[1475]: I1121 14:47:02.904643    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dc8f8403-f2a8-4b2a-9e2e-1be4b561b237-kube-proxy\") pod \"kube-proxy-r9v4p\" (UID: \"dc8f8403-f2a8-4b2a-9e2e-1be4b561b237\") " pod="kube-system/kube-proxy-r9v4p"
	Nov 21 14:47:03 embed-certs-695324 kubelet[1475]: I1121 14:47:03.097764    1475 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 21 14:47:05 embed-certs-695324 kubelet[1475]: I1121 14:47:05.319323    1475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r9v4p" podStartSLOduration=3.319302056 podStartE2EDuration="3.319302056s" podCreationTimestamp="2025-11-21 14:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:47:05.315091042 +0000 UTC m=+8.565968345" watchObservedRunningTime="2025-11-21 14:47:05.319302056 +0000 UTC m=+8.570179350"
	Nov 21 14:47:06 embed-certs-695324 kubelet[1475]: I1121 14:47:06.155467    1475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7hksz" podStartSLOduration=4.155444934 podStartE2EDuration="4.155444934s" podCreationTimestamp="2025-11-21 14:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:47:05.41347933 +0000 UTC m=+8.664356633" watchObservedRunningTime="2025-11-21 14:47:06.155444934 +0000 UTC m=+9.406322237"
	Nov 21 14:47:44 embed-certs-695324 kubelet[1475]: I1121 14:47:44.917850    1475 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:47:45 embed-certs-695324 kubelet[1475]: I1121 14:47:45.060892    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqcrg\" (UniqueName: \"kubernetes.io/projected/d5cc5588-78f5-4ba3-8929-ce406ad776cc-kube-api-access-xqcrg\") pod \"storage-provisioner\" (UID: \"d5cc5588-78f5-4ba3-8929-ce406ad776cc\") " pod="kube-system/storage-provisioner"
	Nov 21 14:47:45 embed-certs-695324 kubelet[1475]: I1121 14:47:45.061217    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d5cc5588-78f5-4ba3-8929-ce406ad776cc-tmp\") pod \"storage-provisioner\" (UID: \"d5cc5588-78f5-4ba3-8929-ce406ad776cc\") " pod="kube-system/storage-provisioner"
	Nov 21 14:47:45 embed-certs-695324 kubelet[1475]: I1121 14:47:45.161747    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1f4fe7a-90b6-4b7a-8bdd-e805634b811d-config-volume\") pod \"coredns-66bc5c9577-fs65k\" (UID: \"a1f4fe7a-90b6-4b7a-8bdd-e805634b811d\") " pod="kube-system/coredns-66bc5c9577-fs65k"
	Nov 21 14:47:45 embed-certs-695324 kubelet[1475]: I1121 14:47:45.162108    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66f6j\" (UniqueName: \"kubernetes.io/projected/a1f4fe7a-90b6-4b7a-8bdd-e805634b811d-kube-api-access-66f6j\") pod \"coredns-66bc5c9577-fs65k\" (UID: \"a1f4fe7a-90b6-4b7a-8bdd-e805634b811d\") " pod="kube-system/coredns-66bc5c9577-fs65k"
	Nov 21 14:47:46 embed-certs-695324 kubelet[1475]: I1121 14:47:46.417213    1475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.41719619 podStartE2EDuration="41.41719619s" podCreationTimestamp="2025-11-21 14:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:47:46.416975183 +0000 UTC m=+49.667852486" watchObservedRunningTime="2025-11-21 14:47:46.41719619 +0000 UTC m=+49.668073493"
	Nov 21 14:47:48 embed-certs-695324 kubelet[1475]: I1121 14:47:48.505568    1475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fs65k" podStartSLOduration=45.505542724 podStartE2EDuration="45.505542724s" podCreationTimestamp="2025-11-21 14:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:47:46.433712318 +0000 UTC m=+49.684589621" watchObservedRunningTime="2025-11-21 14:47:48.505542724 +0000 UTC m=+51.756420043"
	Nov 21 14:47:48 embed-certs-695324 kubelet[1475]: I1121 14:47:48.689851    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97n7t\" (UniqueName: \"kubernetes.io/projected/2825f6cd-a93e-4f6a-9629-98e365849793-kube-api-access-97n7t\") pod \"busybox\" (UID: \"2825f6cd-a93e-4f6a-9629-98e365849793\") " pod="default/busybox"
	Nov 21 14:47:56 embed-certs-695324 kubelet[1475]: E1121 14:47:56.732777    1475 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.76.2:59294->192.168.76.2:10010: write tcp 192.168.76.2:10250->192.168.76.2:38716: write: connection reset by peer
	Nov 21 14:47:56 embed-certs-695324 kubelet[1475]: E1121 14:47:56.733179    1475 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.76.2:59294->192.168.76.2:10010: write tcp 192.168.76.2:59294->192.168.76.2:10010: write: broken pipe
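	
	The two upgradeaware errors at 14:47:56 line up with the apiserver's "use of closed network connection" at the same instant: a streaming connection proxied by the kubelet (client side on 10250, runtime side on 10010) was torn down by its peer mid-copy, which is consistent with a client closing a short-lived exec stream rather than with a kubelet fault. The surrounding journal can be re-read on the node:
	
	  out/minikube-linux-arm64 ssh -p embed-certs-695324 "sudo journalctl -u kubelet --since '2025-11-21 14:47:50' --no-pager"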
	
	
	==> storage-provisioner [c55d3da92c0df8483626a4c994c86da24be9e5fcfcd848573b5ecc5ef7788bc7] <==
	I1121 14:47:45.478154       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:47:45.490457       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:47:45.490746       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:47:45.493701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:45.499823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:47:45.500170       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:47:45.500454       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-695324_3b3a2498-460f-4504-9aed-c216de4806f5!
	I1121 14:47:45.501589       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"95ae73aa-dde4-4062-8331-e524dfe4331a", APIVersion:"v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-695324_3b3a2498-460f-4504-9aed-c216de4806f5 became leader
	W1121 14:47:45.510041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:45.513534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:47:45.601588       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-695324_3b3a2498-460f-4504-9aed-c216de4806f5!
	W1121 14:47:47.516976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:47.521578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:49.525561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:49.536122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:51.539371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:51.544286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:53.550082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:53.559068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:55.563294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:55.569943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:57.574883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:57.595136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
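	
	The provisioner acquired its leader lease and started cleanly; the recurring deprecation warnings come from its leader election still renewing against a legacy v1 Endpoints object every two seconds (a get plus an update per renewal, hence the pairs) instead of a coordination.k8s.io Lease. The object it is renewing is visible directly:
	
	  kubectl --context embed-certs-695324 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml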
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-695324 -n embed-certs-695324
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-695324 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-695324
helpers_test.go:243: (dbg) docker inspect embed-certs-695324:

-- stdout --
	[
	    {
	        "Id": "8d08adc104671e5a2326f3660dea61bbe233168f252b0159466f44e4544784af",
	        "Created": "2025-11-21T14:46:23.190114002Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2846872,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:46:23.272787248Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/8d08adc104671e5a2326f3660dea61bbe233168f252b0159466f44e4544784af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d08adc104671e5a2326f3660dea61bbe233168f252b0159466f44e4544784af/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d08adc104671e5a2326f3660dea61bbe233168f252b0159466f44e4544784af/hosts",
	        "LogPath": "/var/lib/docker/containers/8d08adc104671e5a2326f3660dea61bbe233168f252b0159466f44e4544784af/8d08adc104671e5a2326f3660dea61bbe233168f252b0159466f44e4544784af-json.log",
	        "Name": "/embed-certs-695324",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-695324:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-695324",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d08adc104671e5a2326f3660dea61bbe233168f252b0159466f44e4544784af",
	                "LowerDir": "/var/lib/docker/overlay2/1e699f13d458d470622888b2cd3160c1356d7e28175cc94e8e7d65a75291934f-init/diff:/var/lib/docker/overlay2/789a4b9f9866e585907664b1eaf98d94438dbf699e0511f3ca5ba5ea682b005e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1e699f13d458d470622888b2cd3160c1356d7e28175cc94e8e7d65a75291934f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1e699f13d458d470622888b2cd3160c1356d7e28175cc94e8e7d65a75291934f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1e699f13d458d470622888b2cd3160c1356d7e28175cc94e8e7d65a75291934f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-695324",
	                "Source": "/var/lib/docker/volumes/embed-certs-695324/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-695324",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-695324",
	                "name.minikube.sigs.k8s.io": "embed-certs-695324",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7cfd48fa99a33d42f9354ec0af6e2d47c0e1c8f4132db299bbf711b63d443106",
	            "SandboxKey": "/var/run/docker/netns/7cfd48fa99a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36735"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36736"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36739"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36737"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36738"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-695324": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9b:4d:34:49:ea",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "afe33961296510bf0efcd9290091d3839ca2ca5115c07f5d48c5a394b64c12aa",
	                    "EndpointID": "091cecd5e96d4f9b023165e455d4da6687a97f98f9ec55423bcdf33f12d68c29",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-695324",
	                        "8d08adc10467"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
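One detail worth noting in the inspect output above: the HostConfig section reports "Ulimits": [], i.e. the kic container was created without an explicit per-container ulimit override, so processes inside it inherit whatever nofile limit the Docker daemon itself applies. The following is a minimal sketch for cross-checking this from the host; it assumes a local Docker daemon, reuses the container name from the inspect dump, and the final docker run line is illustrative only, not part of the test flow:

	# Print any explicit ulimit overrides recorded for the container;
	# an empty list corresponds to the "Ulimits": [] field shown above.
	docker inspect --format '{{json .HostConfig.Ulimits}}' embed-certs-695324

	# Show the soft open-files limit that processes inside the container actually get.
	docker exec embed-certs-695324 sh -c 'ulimit -n'

	# Illustration only: an explicit --ulimit flag overrides the daemon default.
	docker run --rm --ulimit nofile=1048576:1048576 busybox sh -c 'ulimit -n'

With an empty Ulimits list, the effective limit is determined by the daemon configuration (for example a "default-ulimits" entry in /etc/docker/daemon.json), so the value observed inside the container can differ between hosts even for identical minikube invocations.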
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-695324 -n embed-certs-695324
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-695324 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-695324 logs -n 25: (1.956859376s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-184410 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-184410   │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ force-systemd-env-041746 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-041746 │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ delete  │ -p force-systemd-env-041746                                                                                                                                                                                                                         │ force-systemd-env-041746 │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:43 UTC │
	│ start   │ -p cert-options-035007 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ ssh     │ cert-options-035007 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ ssh     │ -p cert-options-035007 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ delete  │ -p cert-options-035007                                                                                                                                                                                                                              │ cert-options-035007      │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:43 UTC │
	│ start   │ -p old-k8s-version-092258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:43 UTC │ 21 Nov 25 14:44 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-092258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:44 UTC │ 21 Nov 25 14:44 UTC │
	│ stop    │ -p old-k8s-version-092258 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:44 UTC │ 21 Nov 25 14:45 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-092258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:45 UTC │ 21 Nov 25 14:45 UTC │
	│ start   │ -p old-k8s-version-092258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:45 UTC │ 21 Nov 25 14:45 UTC │
	│ start   │ -p cert-expiration-184410 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-184410   │ jenkins │ v1.37.0 │ 21 Nov 25 14:45 UTC │ 21 Nov 25 14:46 UTC │
	│ delete  │ -p cert-expiration-184410                                                                                                                                                                                                                           │ cert-expiration-184410   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ image   │ old-k8s-version-092258 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ pause   │ -p old-k8s-version-092258 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ start   │ -p no-preload-208006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-208006        │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:47 UTC │
	│ unpause │ -p old-k8s-version-092258 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ delete  │ -p old-k8s-version-092258                                                                                                                                                                                                                           │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ delete  │ -p old-k8s-version-092258                                                                                                                                                                                                                           │ old-k8s-version-092258   │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:46 UTC │
	│ start   │ -p embed-certs-695324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-695324       │ jenkins │ v1.37.0 │ 21 Nov 25 14:46 UTC │ 21 Nov 25 14:47 UTC │
	│ addons  │ enable metrics-server -p no-preload-208006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-208006        │ jenkins │ v1.37.0 │ 21 Nov 25 14:47 UTC │ 21 Nov 25 14:47 UTC │
	│ stop    │ -p no-preload-208006 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-208006        │ jenkins │ v1.37.0 │ 21 Nov 25 14:47 UTC │ 21 Nov 25 14:47 UTC │
	│ addons  │ enable dashboard -p no-preload-208006 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-208006        │ jenkins │ v1.37.0 │ 21 Nov 25 14:47 UTC │ 21 Nov 25 14:47 UTC │
	│ start   │ -p no-preload-208006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-208006        │ jenkins │ v1.37.0 │ 21 Nov 25 14:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:47:48
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:47:48.921466 2852540 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:47:48.921704 2852540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:47:48.921711 2852540 out.go:374] Setting ErrFile to fd 2...
	I1121 14:47:48.921716 2852540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:47:48.921981 2852540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:47:48.922363 2852540 out.go:368] Setting JSON to false
	I1121 14:47:48.923413 2852540 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70217,"bootTime":1763666252,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 14:47:48.923475 2852540 start.go:143] virtualization:  
	I1121 14:47:48.927756 2852540 out.go:179] * [no-preload-208006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:47:48.931082 2852540 notify.go:221] Checking for updates...
	I1121 14:47:48.931764 2852540 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:47:48.935178 2852540 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:47:48.938992 2852540 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:47:48.941925 2852540 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	I1121 14:47:48.945494 2852540 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:47:48.948614 2852540 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:47:48.952503 2852540 config.go:182] Loaded profile config "no-preload-208006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:47:48.953355 2852540 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:47:49.012984 2852540 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:47:49.013162 2852540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:47:49.091388 2852540 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:47:49.071805683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:47:49.091493 2852540 docker.go:319] overlay module found
	I1121 14:47:49.094779 2852540 out.go:179] * Using the docker driver based on existing profile
	I1121 14:47:49.097649 2852540 start.go:309] selected driver: docker
	I1121 14:47:49.097673 2852540 start.go:930] validating driver "docker" against &{Name:no-preload-208006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-208006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:47:49.097899 2852540 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:47:49.098849 2852540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:47:49.160948 2852540 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:47:49.151076381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:47:49.161297 2852540 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:47:49.161336 2852540 cni.go:84] Creating CNI manager for ""
	I1121 14:47:49.161392 2852540 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:47:49.161437 2852540 start.go:353] cluster config:
	{Name:no-preload-208006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-208006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:47:49.164612 2852540 out.go:179] * Starting "no-preload-208006" primary control-plane node in "no-preload-208006" cluster
	I1121 14:47:49.167640 2852540 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 14:47:49.170475 2852540 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:47:49.173451 2852540 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:47:49.173487 2852540 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:47:49.173596 2852540 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/config.json ...
	I1121 14:47:49.173923 2852540 cache.go:107] acquiring lock: {Name:mkcf17144ebb9e4cf17530599113a88357efaad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.173998 2852540 cache.go:115] /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1121 14:47:49.174006 2852540 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 101.364µs
	I1121 14:47:49.174014 2852540 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1121 14:47:49.174024 2852540 cache.go:107] acquiring lock: {Name:mk2c8a0b13865bbc2485475059a5351d81bfa5fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.174053 2852540 cache.go:115] /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1121 14:47:49.174058 2852540 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 35.437µs
	I1121 14:47:49.174064 2852540 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1121 14:47:49.174073 2852540 cache.go:107] acquiring lock: {Name:mk6f5cd4ae8f112091906498a9fbf13737fb7e1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.174100 2852540 cache.go:115] /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1121 14:47:49.174104 2852540 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 32.942µs
	I1121 14:47:49.174110 2852540 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1121 14:47:49.174118 2852540 cache.go:107] acquiring lock: {Name:mka7ae7f4e3f1ac28c23fec2ca16322477d17392 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.174151 2852540 cache.go:115] /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1121 14:47:49.174155 2852540 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 38.292µs
	I1121 14:47:49.174161 2852540 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1121 14:47:49.174169 2852540 cache.go:107] acquiring lock: {Name:mk650c489015758ee052bc4f95260ad256ed811e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.174201 2852540 cache.go:115] /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1121 14:47:49.174206 2852540 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 37.817µs
	I1121 14:47:49.174212 2852540 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1121 14:47:49.174225 2852540 cache.go:107] acquiring lock: {Name:mk491b2c8c5942ed4dbe7004b6031cb6a3cfabea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.174255 2852540 cache.go:115] /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1121 14:47:49.174259 2852540 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.408µs
	I1121 14:47:49.174264 2852540 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1121 14:47:49.174272 2852540 cache.go:107] acquiring lock: {Name:mk63ee20fc68a0d9784f692623ea82f65e136c59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.174297 2852540 cache.go:115] /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1121 14:47:49.174301 2852540 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 30.12µs
	I1121 14:47:49.174306 2852540 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1121 14:47:49.174314 2852540 cache.go:107] acquiring lock: {Name:mk59bedcab9fe96ad70c9894cfda067036228dad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.174339 2852540 cache.go:115] /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1121 14:47:49.174344 2852540 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 31.351µs
	I1121 14:47:49.174350 2852540 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1121 14:47:49.174355 2852540 cache.go:87] Successfully saved all images to host disk.
	I1121 14:47:49.192793 2852540 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:47:49.192817 2852540 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:47:49.192836 2852540 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:47:49.192860 2852540 start.go:360] acquireMachinesLock for no-preload-208006: {Name:mkbee8c16de6300bba99d6e61014d756b275729d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:47:49.192922 2852540 start.go:364] duration metric: took 42.223µs to acquireMachinesLock for "no-preload-208006"
	I1121 14:47:49.192944 2852540 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:47:49.192950 2852540 fix.go:54] fixHost starting: 
	I1121 14:47:49.193250 2852540 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:49.210897 2852540 fix.go:112] recreateIfNeeded on no-preload-208006: state=Stopped err=<nil>
	W1121 14:47:49.210930 2852540 fix.go:138] unexpected machine state, will restart: <nil>
	I1121 14:47:49.214285 2852540 out.go:252] * Restarting existing docker container for "no-preload-208006" ...
	I1121 14:47:49.214371 2852540 cli_runner.go:164] Run: docker start no-preload-208006
	I1121 14:47:49.473472 2852540 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:49.497816 2852540 kic.go:430] container "no-preload-208006" state is running.
	I1121 14:47:49.500483 2852540 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-208006
	I1121 14:47:49.526844 2852540 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/config.json ...
	I1121 14:47:49.527093 2852540 machine.go:94] provisionDockerMachine start ...
	I1121 14:47:49.527155 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:49.549340 2852540 main.go:143] libmachine: Using SSH client type: native
	I1121 14:47:49.549680 2852540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36740 <nil> <nil>}
	I1121 14:47:49.549689 2852540 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:47:49.551077 2852540 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1121 14:47:52.696828 2852540 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-208006
	
	I1121 14:47:52.696852 2852540 ubuntu.go:182] provisioning hostname "no-preload-208006"
	I1121 14:47:52.696940 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:52.716119 2852540 main.go:143] libmachine: Using SSH client type: native
	I1121 14:47:52.716430 2852540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36740 <nil> <nil>}
	I1121 14:47:52.716447 2852540 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-208006 && echo "no-preload-208006" | sudo tee /etc/hostname
	I1121 14:47:52.871469 2852540 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-208006
	
	I1121 14:47:52.871556 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:52.889686 2852540 main.go:143] libmachine: Using SSH client type: native
	I1121 14:47:52.890002 2852540 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36740 <nil> <nil>}
	I1121 14:47:52.890025 2852540 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-208006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-208006/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-208006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:47:53.029653 2852540 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:47:53.029681 2852540 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-2633933/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-2633933/.minikube}
	I1121 14:47:53.029712 2852540 ubuntu.go:190] setting up certificates
	I1121 14:47:53.029722 2852540 provision.go:84] configureAuth start
	I1121 14:47:53.029785 2852540 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-208006
	I1121 14:47:53.047711 2852540 provision.go:143] copyHostCerts
	I1121 14:47:53.047780 2852540 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem, removing ...
	I1121 14:47:53.047798 2852540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem
	I1121 14:47:53.047878 2852540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem (1123 bytes)
	I1121 14:47:53.047979 2852540 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem, removing ...
	I1121 14:47:53.047985 2852540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem
	I1121 14:47:53.048011 2852540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem (1679 bytes)
	I1121 14:47:53.048102 2852540 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem, removing ...
	I1121 14:47:53.048112 2852540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem
	I1121 14:47:53.048137 2852540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem (1082 bytes)
	I1121 14:47:53.048189 2852540 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem org=jenkins.no-preload-208006 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-208006]
	I1121 14:47:53.754597 2852540 provision.go:177] copyRemoteCerts
	I1121 14:47:53.754675 2852540 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:47:53.754722 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:53.774910 2852540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36740 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:53.876839 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1121 14:47:53.895889 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:47:53.913553 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:47:53.932064 2852540 provision.go:87] duration metric: took 902.327151ms to configureAuth
	I1121 14:47:53.932130 2852540 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:47:53.932362 2852540 config.go:182] Loaded profile config "no-preload-208006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:47:53.932376 2852540 machine.go:97] duration metric: took 4.405275171s to provisionDockerMachine
	I1121 14:47:53.932385 2852540 start.go:293] postStartSetup for "no-preload-208006" (driver="docker")
	I1121 14:47:53.932394 2852540 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:47:53.932450 2852540 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:47:53.932505 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:53.950080 2852540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36740 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:54.049389 2852540 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:47:54.052853 2852540 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:47:54.052885 2852540 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:47:54.052897 2852540 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/addons for local assets ...
	I1121 14:47:54.052954 2852540 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/files for local assets ...
	I1121 14:47:54.053077 2852540 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem -> 26357852.pem in /etc/ssl/certs
	I1121 14:47:54.053215 2852540 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:47:54.060803 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:47:54.079615 2852540 start.go:296] duration metric: took 147.212809ms for postStartSetup
	I1121 14:47:54.079696 2852540 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:47:54.079736 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:54.104073 2852540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36740 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:54.202133 2852540 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:47:54.206805 2852540 fix.go:56] duration metric: took 5.013847847s for fixHost
	I1121 14:47:54.206860 2852540 start.go:83] releasing machines lock for "no-preload-208006", held for 5.013897863s
	I1121 14:47:54.206929 2852540 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-208006
	I1121 14:47:54.223643 2852540 ssh_runner.go:195] Run: cat /version.json
	I1121 14:47:54.223647 2852540 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:47:54.223712 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:54.223755 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:54.250870 2852540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36740 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:54.252248 2852540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36740 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:54.353921 2852540 ssh_runner.go:195] Run: systemctl --version
	I1121 14:47:54.470187 2852540 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:47:54.475176 2852540 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:47:54.475286 2852540 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:47:54.485674 2852540 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 14:47:54.485699 2852540 start.go:496] detecting cgroup driver to use...
	I1121 14:47:54.485733 2852540 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:47:54.485788 2852540 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:47:54.503580 2852540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:47:54.518225 2852540 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:47:54.518301 2852540 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:47:54.535071 2852540 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:47:54.549959 2852540 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:47:54.673655 2852540 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:47:54.793907 2852540 docker.go:234] disabling docker service ...
	I1121 14:47:54.794034 2852540 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:47:54.810997 2852540 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:47:54.824239 2852540 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:47:54.935125 2852540 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:47:55.062499 2852540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:47:55.075779 2852540 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:47:55.096949 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1121 14:47:55.108652 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:47:55.118394 2852540 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1121 14:47:55.118539 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1121 14:47:55.128557 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:47:55.140188 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:47:55.149880 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:47:55.159944 2852540 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:47:55.169676 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:47:55.179345 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:47:55.189541 2852540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:47:55.199278 2852540 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:47:55.208191 2852540 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:47:55.216097 2852540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:47:55.345014 2852540 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1121 14:47:55.503356 2852540 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:47:55.503477 2852540 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:47:55.507454 2852540 start.go:564] Will wait 60s for crictl version
	I1121 14:47:55.507569 2852540 ssh_runner.go:195] Run: which crictl
	I1121 14:47:55.514434 2852540 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:47:55.539911 2852540 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:47:55.540031 2852540 ssh_runner.go:195] Run: containerd --version
	I1121 14:47:55.571210 2852540 ssh_runner.go:195] Run: containerd --version
	I1121 14:47:55.602049 2852540 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1121 14:47:55.605190 2852540 cli_runner.go:164] Run: docker network inspect no-preload-208006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:47:55.622969 2852540 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:47:55.626857 2852540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:47:55.636527 2852540 kubeadm.go:884] updating cluster {Name:no-preload-208006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-208006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:47:55.636656 2852540 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:47:55.636710 2852540 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:47:55.662144 2852540 containerd.go:627] all images are preloaded for containerd runtime.
	I1121 14:47:55.662170 2852540 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:47:55.662178 2852540 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1121 14:47:55.662278 2852540 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-208006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-208006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
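
The kubelet flags above go into a systemd drop-in rather than the main unit: the bare `ExecStart=` first clears the packaged command line, and the second `ExecStart=` replaces it. Inside the node the effective unit (including the 10-kubeadm.conf drop-in scp'd a few lines below) can be inspected with standard systemd tooling; a quick check, assuming the profile name from this run:

    # Print the kubelet unit plus all drop-ins, then its runtime state.
    minikube ssh -p no-preload-208006 "systemctl cat kubelet"
    minikube ssh -p no-preload-208006 "systemctl status kubelet --no-pager"
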
	I1121 14:47:55.662346 2852540 ssh_runner.go:195] Run: sudo crictl info
	I1121 14:47:55.690138 2852540 cni.go:84] Creating CNI manager for ""
	I1121 14:47:55.690159 2852540 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:47:55.690177 2852540 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:47:55.690199 2852540 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-208006 NodeName:no-preload-208006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:47:55.690326 2852540 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-208006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:47:55.690401 2852540 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:47:55.699042 2852540 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:47:55.699125 2852540 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:47:55.706607 2852540 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1121 14:47:55.719999 2852540 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:47:55.732470 2852540 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
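
The rendered kubeadm config lands on the node as /var/tmp/minikube/kubeadm.yaml.new (2230 bytes, matching the scp above). Recent kubeadm releases can lint such a file before it is used; a sketch, assuming kubeadm is staged alongside kubelet in the binaries directory listed above (the `ls` only confirms the directory, so the path is an assumption):

    # Validate the generated config in place; `kubeadm config validate`
    # is available in recent kubeadm releases. Run inside the node.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new
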
	I1121 14:47:55.745991 2852540 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:47:55.749657 2852540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:47:55.759125 2852540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:47:55.872400 2852540 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:47:55.888513 2852540 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006 for IP: 192.168.85.2
	I1121 14:47:55.888532 2852540 certs.go:195] generating shared ca certs ...
	I1121 14:47:55.888548 2852540 certs.go:227] acquiring lock for ca certs: {Name:mk0a1b8efa9f1d453751b4f7afafeea16d7243a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:47:55.888693 2852540 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.key
	I1121 14:47:55.888731 2852540 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.key
	I1121 14:47:55.888739 2852540 certs.go:257] generating profile certs ...
	I1121 14:47:55.888830 2852540 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.key
	I1121 14:47:55.888890 2852540 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.key.78bb1819
	I1121 14:47:55.888933 2852540 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.key
	I1121 14:47:55.889073 2852540 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785.pem (1338 bytes)
	W1121 14:47:55.889104 2852540 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785_empty.pem, impossibly tiny 0 bytes
	I1121 14:47:55.889113 2852540 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:47:55.889140 2852540 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem (1082 bytes)
	I1121 14:47:55.889168 2852540 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:47:55.889190 2852540 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem (1679 bytes)
	I1121 14:47:55.889233 2852540 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:47:55.889868 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:47:55.909424 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 14:47:55.927725 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:47:55.945553 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 14:47:55.963355 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:47:55.996425 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:47:56.023401 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:47:56.062566 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:47:56.101527 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /usr/share/ca-certificates/26357852.pem (1708 bytes)
	I1121 14:47:56.124150 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:47:56.143823 2852540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/2635785.pem --> /usr/share/ca-certificates/2635785.pem (1338 bytes)
	I1121 14:47:56.164251 2852540 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:47:56.177953 2852540 ssh_runner.go:195] Run: openssl version
	I1121 14:47:56.186725 2852540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:47:56.198025 2852540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:47:56.202911 2852540 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:57 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:47:56.202976 2852540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:47:56.255504 2852540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:47:56.264092 2852540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2635785.pem && ln -fs /usr/share/ca-certificates/2635785.pem /etc/ssl/certs/2635785.pem"
	I1121 14:47:56.272216 2852540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2635785.pem
	I1121 14:47:56.275921 2852540 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:03 /usr/share/ca-certificates/2635785.pem
	I1121 14:47:56.275988 2852540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2635785.pem
	I1121 14:47:56.316927 2852540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2635785.pem /etc/ssl/certs/51391683.0"
	I1121 14:47:56.324682 2852540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26357852.pem && ln -fs /usr/share/ca-certificates/26357852.pem /etc/ssl/certs/26357852.pem"
	I1121 14:47:56.332971 2852540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26357852.pem
	I1121 14:47:56.336786 2852540 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:03 /usr/share/ca-certificates/26357852.pem
	I1121 14:47:56.336902 2852540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26357852.pem
	I1121 14:47:56.377763 2852540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/26357852.pem /etc/ssl/certs/3ec20f2e.0"
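
The three ls/hash/ln sequences above implement OpenSSL's hashed-directory CA lookup: `openssl x509 -hash -noout` prints the subject-name hash (b5213941, 51391683 and 3ec20f2e here), and OpenSSL resolves a CA at verification time through /etc/ssl/certs/<hash>.N, where N disambiguates hash collisions starting at 0. The same linking step done by hand for one of the certs:

    # Recreate one of the hash links above; the .0 suffix marks the first
    # (and here the only) certificate with this subject hash.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
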
	I1121 14:47:56.386236 2852540 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:47:56.390166 2852540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 14:47:56.431383 2852540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 14:47:56.474709 2852540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 14:47:56.519652 2852540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 14:47:56.572147 2852540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 14:47:56.623334 2852540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
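
The six `-checkend 86400` runs are expiry probes: `openssl x509 -checkend <seconds>` exits 0 only if the certificate remains valid that many seconds from now, so a non-zero status here would flag a control-plane cert expiring within 24 hours. The same sweep as a loop over the paths checked above:

    # Flag any control-plane cert that expires within the next 24h.
    for c in apiserver-etcd-client apiserver-kubelet-client \
             etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
        openssl x509 -noout -checkend 86400 \
            -in "/var/lib/minikube/certs/${c}.crt" >/dev/null \
            || echo "${c}.crt expires within 24h"
    done
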
	I1121 14:47:56.716330 2852540 kubeadm.go:401] StartCluster: {Name:no-preload-208006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-208006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:47:56.716421 2852540 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1121 14:47:56.716493 2852540 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:47:56.792581 2852540 cri.go:89] found id: "9a1d80f65a499b5e66dbd87f28482a42a0845efa7b882a7e675d26b780ed885b"
	I1121 14:47:56.792607 2852540 cri.go:89] found id: "b011fda5b154db9e2a0ab88b094fd30c123607858f7144eb0187d89bc6c74ac9"
	I1121 14:47:56.792611 2852540 cri.go:89] found id: "f3dd66e01305aa67da4fef766c626727d676c7ffe74473a1010270d904b974d1"
	I1121 14:47:56.792621 2852540 cri.go:89] found id: "7e57e7c8851a9cc8ab9aae48190e5273f29aca6479946be08dd8ce6aae53eae4"
	I1121 14:47:56.792625 2852540 cri.go:89] found id: "05bfdef30141a8e21622a5df2d0b5fad2030cdf0b24ad8c65c35f99be64b97da"
	I1121 14:47:56.792640 2852540 cri.go:89] found id: "e51ffcbc830b08843be90ae4a5cbc20e3b6d6721e6d01983023416c9a7ebff67"
	I1121 14:47:56.792644 2852540 cri.go:89] found id: "670da2ec0c5a22405cd819ddba5cacc0165673f1fa923b5507091c8767428c9e"
	I1121 14:47:56.792647 2852540 cri.go:89] found id: "8f30bcc0ffef68f33676c531a54c185943fd5843eeb062e2a7a47fc41ccff421"
	I1121 14:47:56.792650 2852540 cri.go:89] found id: ""
	I1121 14:47:56.792697 2852540 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1121 14:47:56.838753 2852540 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"78fc78989bb3da213502a60531419000303d98f02aa9059a0dc4195ead8b64d1","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/78fc78989bb3da213502a60531419000303d98f02aa9059a0dc4195ead8b64d1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/78fc78989bb3da213502a60531419000303d98f02aa9059a0dc4195ead8b64d1/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"78fc78989bb3da213502a60531419000303d98f02aa9059a0dc4195ead8b64d1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-208006_749c23d679ae75c87c21138a837c7997","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.
cri.sandbox-name":"etcd-no-preload-208006","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"749c23d679ae75c87c21138a837c7997"},"owner":"root"},{"ociVersion":"1.2.1","id":"dbeb041c180bfd8efe1a8378342d4b1cc19fc87407e31aa23f81d737902ef8f6","pid":924,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbeb041c180bfd8efe1a8378342d4b1cc19fc87407e31aa23f81d737902ef8f6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbeb041c180bfd8efe1a8378342d4b1cc19fc87407e31aa23f81d737902ef8f6/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"dbeb041c180bfd8efe1a8378342d4b1cc19fc87407e31aa23f81d737902ef8f6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/k
ube-system_kube-controller-manager-no-preload-208006_e90a541b9ba9814a80f6149e26dd1e8d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-208006","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e90a541b9ba9814a80f6149e26dd1e8d"},"owner":"root"}]
	I1121 14:47:56.838841 2852540 cri.go:126] list returned 2 containers
	I1121 14:47:56.838850 2852540 cri.go:129] container: {ID:78fc78989bb3da213502a60531419000303d98f02aa9059a0dc4195ead8b64d1 Status:stopped}
	I1121 14:47:56.838873 2852540 cri.go:131] skipping 78fc78989bb3da213502a60531419000303d98f02aa9059a0dc4195ead8b64d1 - not in ps
	I1121 14:47:56.838878 2852540 cri.go:129] container: {ID:dbeb041c180bfd8efe1a8378342d4b1cc19fc87407e31aa23f81d737902ef8f6 Status:created}
	I1121 14:47:56.838883 2852540 cri.go:131] skipping dbeb041c180bfd8efe1a8378342d4b1cc19fc87407e31aa23f81d737902ef8f6 - not in ps
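
The two "skipping ... - not in ps" lines are a cross-check between the runtimes' views: runc reports every task under containerd's k8s.io root, but only IDs that crictl also listed (the eight kube-system containers found above) are acted on. Both runc entries here carry container-type "sandbox" in their JSON annotations, i.e. pause sandboxes, which `crictl ps` never lists, so they fail the membership check and are left alone. The two listings side by side:

    # Containers as the CRI sees them (kube-system only, one ID per line):
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # Everything runc tracks for containerd's k8s.io namespace,
    # including the pause sandboxes crictl ps omits:
    sudo runc --root /run/containerd/runc/k8s.io list
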
	I1121 14:47:56.838941 2852540 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:47:56.865391 2852540 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 14:47:56.865408 2852540 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 14:47:56.865466 2852540 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 14:47:56.888216 2852540 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:47:56.889137 2852540 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-208006" does not appear in /home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:47:56.889657 2852540 kubeconfig.go:62] /home/jenkins/minikube-integration/21847-2633933/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-208006" cluster setting kubeconfig missing "no-preload-208006" context setting]
	I1121 14:47:56.890383 2852540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/kubeconfig: {Name:mkd905aaf74d26e32c0b3e46a7edfbf13f4b98ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:47:56.894225 2852540 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 14:47:56.930744 2852540 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1121 14:47:56.930774 2852540 kubeadm.go:602] duration metric: took 65.354322ms to restartPrimaryControlPlane
	I1121 14:47:56.930783 2852540 kubeadm.go:403] duration metric: took 214.463799ms to StartCluster
	I1121 14:47:56.930798 2852540 settings.go:142] acquiring lock: {Name:mkd6064915932eca5a3b1d70feb4ec8240f340da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:47:56.930870 2852540 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:47:56.932293 2852540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/kubeconfig: {Name:mkd905aaf74d26e32c0b3e46a7edfbf13f4b98ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:47:56.932504 2852540 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:47:56.932887 2852540 config.go:182] Loaded profile config "no-preload-208006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:47:56.932927 2852540 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:47:56.932994 2852540 addons.go:70] Setting storage-provisioner=true in profile "no-preload-208006"
	I1121 14:47:56.933008 2852540 addons.go:239] Setting addon storage-provisioner=true in "no-preload-208006"
	W1121 14:47:56.933013 2852540 addons.go:248] addon storage-provisioner should already be in state true
	I1121 14:47:56.933488 2852540 host.go:66] Checking if "no-preload-208006" exists ...
	I1121 14:47:56.933677 2852540 addons.go:70] Setting default-storageclass=true in profile "no-preload-208006"
	I1121 14:47:56.933692 2852540 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-208006"
	I1121 14:47:56.933949 2852540 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:56.934470 2852540 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:56.947603 2852540 addons.go:70] Setting metrics-server=true in profile "no-preload-208006"
	I1121 14:47:56.947675 2852540 addons.go:239] Setting addon metrics-server=true in "no-preload-208006"
	W1121 14:47:56.947699 2852540 addons.go:248] addon metrics-server should already be in state true
	I1121 14:47:56.947764 2852540 host.go:66] Checking if "no-preload-208006" exists ...
	I1121 14:47:56.948255 2852540 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:56.964826 2852540 addons.go:70] Setting dashboard=true in profile "no-preload-208006"
	I1121 14:47:56.964859 2852540 addons.go:239] Setting addon dashboard=true in "no-preload-208006"
	W1121 14:47:56.967011 2852540 addons.go:248] addon dashboard should already be in state true
	I1121 14:47:56.967066 2852540 host.go:66] Checking if "no-preload-208006" exists ...
	I1121 14:47:56.969565 2852540 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:56.975001 2852540 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:47:56.977908 2852540 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:47:56.977937 2852540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:47:56.978005 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:56.982870 2852540 out.go:179] * Verifying Kubernetes components...
	I1121 14:47:56.993147 2852540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:47:57.051038 2852540 addons.go:239] Setting addon default-storageclass=true in "no-preload-208006"
	W1121 14:47:57.051062 2852540 addons.go:248] addon default-storageclass should already be in state true
	I1121 14:47:57.051087 2852540 host.go:66] Checking if "no-preload-208006" exists ...
	I1121 14:47:57.051486 2852540 cli_runner.go:164] Run: docker container inspect no-preload-208006 --format={{.State.Status}}
	I1121 14:47:57.051687 2852540 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1121 14:47:57.057232 2852540 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1121 14:47:57.057259 2852540 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1121 14:47:57.057351 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:57.061353 2852540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36740 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:57.067400 2852540 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1121 14:47:57.077439 2852540 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1121 14:47:57.080409 2852540 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1121 14:47:57.080441 2852540 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1121 14:47:57.080513 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:57.097088 2852540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36740 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:57.113241 2852540 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:47:57.113261 2852540 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:47:57.113323 2852540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208006
	I1121 14:47:57.137369 2852540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36740 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
	I1121 14:47:57.154104 2852540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36740 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa Username:docker}
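
All four sshutil clients dial 127.0.0.1:36740, the host port Docker mapped to the node container's 22/tcp; the inspect template on the cli_runner lines above is how that port is discovered. Reproducing the same session by hand, with key path and username taken from the log:

    # Resolve the published SSH port for the node container...
    port=$(docker container inspect \
        -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
        no-preload-208006)
    # ...and open the connection the test harness uses.
    ssh -p "$port" \
        -i /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/no-preload-208006/id_rsa \
        docker@127.0.0.1
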
	I1121 14:47:57.436524 2852540 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:47:57.480782 2852540 node_ready.go:35] waiting up to 6m0s for node "no-preload-208006" to be "Ready" ...
	I1121 14:47:57.516584 2852540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:47:57.724922 2852540 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1121 14:47:57.724944 2852540 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1121 14:47:57.754479 2852540 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1121 14:47:57.754502 2852540 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1121 14:47:57.914333 2852540 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1121 14:47:57.914356 2852540 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1121 14:47:57.971277 2852540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:47:58.095023 2852540 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1121 14:47:58.095052 2852540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1121 14:47:58.245627 2852540 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1121 14:47:58.245654 2852540 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1121 14:47:58.330608 2852540 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1121 14:47:58.330636 2852540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1121 14:47:58.431401 2852540 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1121 14:47:58.431430 2852540 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1121 14:47:58.473925 2852540 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 14:47:58.473952 2852540 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1121 14:47:58.540571 2852540 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1121 14:47:58.540597 2852540 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1121 14:47:58.606294 2852540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 14:47:58.661298 2852540 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1121 14:47:58.661328 2852540 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1121 14:47:58.760325 2852540 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1121 14:47:58.760356 2852540 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1121 14:47:58.834751 2852540 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 14:47:58.834775 2852540 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1121 14:47:58.874625 2852540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
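
After the three kubectl apply batches above (storage-provisioner plus storageclass, metrics-server, dashboard), the addon workloads still have to roll out. A quick way to watch them converge, assuming the stock addon object names (the kubernetes-dashboard namespace and deployment, and the metrics-server deployment in kube-system) rather than anything this log confirms:

    kubectl --context no-preload-208006 -n kube-system \
        rollout status deployment/metrics-server --timeout=120s
    kubectl --context no-preload-208006 -n kubernetes-dashboard \
        rollout status deployment/kubernetes-dashboard --timeout=120s
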
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	c1e6b3e96fde8       1611cd07b61d5       10 seconds ago       Running             busybox                   0                   a4dbf3d722bf6       busybox                                      default
	1fa3deca4f712       138784d87c9c5       16 seconds ago       Running             coredns                   0                   cd78e9aa80e31       coredns-66bc5c9577-fs65k                     kube-system
	c55d3da92c0df       ba04bb24b9575       16 seconds ago       Running             storage-provisioner       0                   e0215c9353668       storage-provisioner                          kube-system
	a5a9aa39a69c5       b1a8c6f707935       57 seconds ago       Running             kindnet-cni               0                   e09682004aff5       kindnet-7hksz                                kube-system
	9314412663f4f       05baa95f5142d       58 seconds ago       Running             kube-proxy                0                   ed42d49616af7       kube-proxy-r9v4p                             kube-system
	47316d8361377       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   ed021ff9e7e8a       kube-controller-manager-embed-certs-695324   kube-system
	8089a6de675f7       a1894772a478e       About a minute ago   Running             etcd                      0                   ed4fb53003b44       etcd-embed-certs-695324                      kube-system
	9d593bf1d15ca       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   9cd78d9766fd3       kube-apiserver-embed-certs-695324            kube-system
	3ab7a046e22a5       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   9583f713252bb       kube-scheduler-embed-certs-695324            kube-system
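
The table above is the runtime's own container listing in crictl's wide format: truncated container and image IDs, an attempt counter, and the pod sandbox ID each container belongs to. It can be regenerated on the node at any point during triage:

    sudo crictl ps -a      # all containers, in any state
    sudo crictl pods       # the sandboxes the POD ID column refers to
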
	
	
	==> containerd <==
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.585569403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fs65k,Uid:a1f4fe7a-90b6-4b7a-8bdd-e805634b811d,Namespace:kube-system,Attempt:0,}"
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.635438419Z" level=info msg="connecting to shim cd78e9aa80e31bd3094296d8c773ac858d8ca7bb1957cc480cef10de9448834b" address="unix:///run/containerd/s/422a86a19d4f1b65d6df97e09f322b6ab2f0266e979c4539a7c7f487f15e47d0" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.693207589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fs65k,Uid:a1f4fe7a-90b6-4b7a-8bdd-e805634b811d,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd78e9aa80e31bd3094296d8c773ac858d8ca7bb1957cc480cef10de9448834b\""
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.701313088Z" level=info msg="CreateContainer within sandbox \"cd78e9aa80e31bd3094296d8c773ac858d8ca7bb1957cc480cef10de9448834b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.716215117Z" level=info msg="Container 1fa3deca4f712ab99b7ba61a6a86cc5dfba490db5336b7ea06ee51c89f8a3fa2: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.726321862Z" level=info msg="CreateContainer within sandbox \"cd78e9aa80e31bd3094296d8c773ac858d8ca7bb1957cc480cef10de9448834b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1fa3deca4f712ab99b7ba61a6a86cc5dfba490db5336b7ea06ee51c89f8a3fa2\""
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.727165276Z" level=info msg="StartContainer for \"1fa3deca4f712ab99b7ba61a6a86cc5dfba490db5336b7ea06ee51c89f8a3fa2\""
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.729263455Z" level=info msg="connecting to shim 1fa3deca4f712ab99b7ba61a6a86cc5dfba490db5336b7ea06ee51c89f8a3fa2" address="unix:///run/containerd/s/422a86a19d4f1b65d6df97e09f322b6ab2f0266e979c4539a7c7f487f15e47d0" protocol=ttrpc version=3
	Nov 21 14:47:45 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:45.808213646Z" level=info msg="StartContainer for \"1fa3deca4f712ab99b7ba61a6a86cc5dfba490db5336b7ea06ee51c89f8a3fa2\" returns successfully"
	Nov 21 14:47:48 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:48.838284601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2825f6cd-a93e-4f6a-9629-98e365849793,Namespace:default,Attempt:0,}"
	Nov 21 14:47:48 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:48.909728069Z" level=info msg="connecting to shim a4dbf3d722bf60eb6930556fb80444d529fe992c62e7e70b2a0987893d78ae01" address="unix:///run/containerd/s/458723e91383e9bdc5dc0fc8854af6cfe500486d5aaaa77752a154c3b28780ce" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:47:49 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:49.019881957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2825f6cd-a93e-4f6a-9629-98e365849793,Namespace:default,Attempt:0,} returns sandbox id \"a4dbf3d722bf60eb6930556fb80444d529fe992c62e7e70b2a0987893d78ae01\""
	Nov 21 14:47:49 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:49.023172849Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.153824896Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.155810800Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937186"
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.158272965Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.161588628Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.162202797Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.138792202s"
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.162351757Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.171370656Z" level=info msg="CreateContainer within sandbox \"a4dbf3d722bf60eb6930556fb80444d529fe992c62e7e70b2a0987893d78ae01\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.185103783Z" level=info msg="Container c1e6b3e96fde8b1d1ca7d0ec7c0eaef38e40b6083887df3831e8038feefed77c: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.200956333Z" level=info msg="CreateContainer within sandbox \"a4dbf3d722bf60eb6930556fb80444d529fe992c62e7e70b2a0987893d78ae01\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"c1e6b3e96fde8b1d1ca7d0ec7c0eaef38e40b6083887df3831e8038feefed77c\""
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.203475465Z" level=info msg="StartContainer for \"c1e6b3e96fde8b1d1ca7d0ec7c0eaef38e40b6083887df3831e8038feefed77c\""
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.204569973Z" level=info msg="connecting to shim c1e6b3e96fde8b1d1ca7d0ec7c0eaef38e40b6083887df3831e8038feefed77c" address="unix:///run/containerd/s/458723e91383e9bdc5dc0fc8854af6cfe500486d5aaaa77752a154c3b28780ce" protocol=ttrpc version=3
	Nov 21 14:47:51 embed-certs-695324 containerd[760]: time="2025-11-21T14:47:51.281790809Z" level=info msg="StartContainer for \"c1e6b3e96fde8b1d1ca7d0ec7c0eaef38e40b6083887df3831e8038feefed77c\" returns successfully"
	
	
	==> coredns [1fa3deca4f712ab99b7ba61a6a86cc5dfba490db5336b7ea06ee51c89f8a3fa2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51493 - 30880 "HINFO IN 8925272948188809987.8459650712627436794. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.049608354s
	
	
	==> describe nodes <==
	Name:               embed-certs-695324
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-695324
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=embed-certs-695324
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_46_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:46:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-695324
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:47:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:47:58 +0000   Fri, 21 Nov 2025 14:46:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:47:58 +0000   Fri, 21 Nov 2025 14:46:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:47:58 +0000   Fri, 21 Nov 2025 14:46:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:47:58 +0000   Fri, 21 Nov 2025 14:47:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-695324
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                b11a29ee-cc79-4f00-a5de-9472aa7b6725
	  Boot ID:                    41b0e09d-5a9a-49c9-8980-dca608ba3fce
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-66bc5c9577-fs65k                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     59s
	  kube-system                 etcd-embed-certs-695324                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         65s
	  kube-system                 kindnet-7hksz                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      60s
	  kube-system                 kube-apiserver-embed-certs-695324             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-embed-certs-695324    200m (10%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-proxy-r9v4p                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-embed-certs-695324             100m (5%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 57s                kube-proxy       
	  Normal   NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 78s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node embed-certs-695324 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node embed-certs-695324 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node embed-certs-695324 status is now: NodeHasSufficientPID
	  Normal   Starting                 78s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 66s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  65s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  65s                kubelet          Node embed-certs-695324 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s                kubelet          Node embed-certs-695324 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s                kubelet          Node embed-certs-695324 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           61s                node-controller  Node embed-certs-695324 event: Registered Node embed-certs-695324 in Controller
	  Normal   NodeReady                18s                kubelet          Node embed-certs-695324 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 13:02] overlayfs: idmapped layers are currently not supported
	[Nov21 13:03] overlayfs: idmapped layers are currently not supported
	[Nov21 13:06] overlayfs: idmapped layers are currently not supported
	[Nov21 13:08] overlayfs: idmapped layers are currently not supported
	[Nov21 13:09] overlayfs: idmapped layers are currently not supported
	[Nov21 13:10] overlayfs: idmapped layers are currently not supported
	[ +19.808801] overlayfs: idmapped layers are currently not supported
	[Nov21 13:11] overlayfs: idmapped layers are currently not supported
	[Nov21 13:12] overlayfs: idmapped layers are currently not supported
	[Nov21 13:13] overlayfs: idmapped layers are currently not supported
	[Nov21 13:14] overlayfs: idmapped layers are currently not supported
	[Nov21 13:15] overlayfs: idmapped layers are currently not supported
	[ +16.772572] overlayfs: idmapped layers are currently not supported
	[Nov21 13:16] overlayfs: idmapped layers are currently not supported
	[Nov21 13:17] overlayfs: idmapped layers are currently not supported
	[ +27.396777] overlayfs: idmapped layers are currently not supported
	[Nov21 13:18] overlayfs: idmapped layers are currently not supported
	[ +25.430119] overlayfs: idmapped layers are currently not supported
	[Nov21 13:19] overlayfs: idmapped layers are currently not supported
	[Nov21 13:20] overlayfs: idmapped layers are currently not supported
	[Nov21 13:21] overlayfs: idmapped layers are currently not supported
	[Nov21 13:22] overlayfs: idmapped layers are currently not supported
	[Nov21 13:23] overlayfs: idmapped layers are currently not supported
	[Nov21 13:24] overlayfs: idmapped layers are currently not supported
	[Nov21 13:55] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [8089a6de675f779d3fb989cf34d1ec9a6079eb37021c60f8536d10642bd9eade] <==
	{"level":"warn","ts":"2025-11-21T14:46:51.846673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:51.883085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:51.914407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:51.924987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:51.967070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:51.987467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.022282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.054456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.122238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.143065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.166887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.193193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.225450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.245230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.278757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.303610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.333910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.355045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.379822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.423334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.484845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.528051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.570267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.602178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:46:52.725173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39822","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:48:02 up 19:30,  0 user,  load average: 4.51, 3.56, 2.97
	Linux embed-certs-695324 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a5a9aa39a69c531337c2df1b274bb0a10160a7c35003839895783b3de7fbf962] <==
	I1121 14:47:04.622383       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:47:04.622660       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1121 14:47:04.622783       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:47:04.622795       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:47:04.622808       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:47:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:47:04.824310       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:47:04.824327       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:47:04.824336       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:47:04.824644       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 14:47:34.829354       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 14:47:34.829504       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 14:47:34.829597       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 14:47:34.829681       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1121 14:47:36.424963       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:47:36.425000       1 metrics.go:72] Registering metrics
	I1121 14:47:36.425092       1 controller.go:711] "Syncing nftables rules"
	I1121 14:47:44.829617       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:47:44.829675       1 main.go:301] handling current node
	I1121 14:47:54.825122       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:47:54.825288       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9d593bf1d15ca44ecf7c0dfbba9a918a766622933ff8bbdee876cc68aea573f9] <==
	I1121 14:46:53.908650       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1121 14:46:53.913098       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:46:53.915636       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 14:46:53.920943       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:46:53.923599       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:46:53.923641       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:46:54.085942       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:46:54.571719       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:46:54.591302       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:46:54.591325       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:46:55.824240       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:46:55.875106       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:46:55.971922       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:46:55.981167       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1121 14:46:55.982658       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:46:55.990959       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:46:56.842003       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:46:56.872118       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:46:56.969794       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:46:57.034350       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:47:02.162738       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:47:02.169366       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:47:02.660134       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:47:02.960766       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1121 14:47:56.732181       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:46244: use of closed network connection
	
	
	==> kube-controller-manager [47316d836137789125d57ba9c739c2e03666cfd1e711824a4e9100be521f1a8c] <==
	I1121 14:47:01.846712       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-695324"
	I1121 14:47:01.846861       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1121 14:47:01.849468       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 14:47:01.861836       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 14:47:01.866355       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:47:01.866467       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 14:47:01.866550       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 14:47:01.868251       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:47:01.868521       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 14:47:01.868569       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1121 14:47:01.868626       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:47:01.868744       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 14:47:01.871795       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 14:47:01.871990       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1121 14:47:01.872712       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:47:01.873418       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 14:47:01.873526       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:47:01.873540       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:47:01.873548       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1121 14:47:01.880002       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:47:01.891842       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:47:01.903582       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:47:01.903724       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 14:47:01.903789       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:47:46.852997       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9314412663f4fd283ab31086847bd67f2e7c6d2447448091c9c117bf267f7ca1] <==
	I1121 14:47:04.399232       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:47:04.562678       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:47:04.664397       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:47:04.664435       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1121 14:47:04.664511       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:47:04.775740       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:47:04.775795       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:47:04.789783       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:47:04.790086       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:47:04.790110       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:47:04.795407       1 config.go:200] "Starting service config controller"
	I1121 14:47:04.795426       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:47:04.795448       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:47:04.795452       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:47:04.795464       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:47:04.795468       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:47:04.796083       1 config.go:309] "Starting node config controller"
	I1121 14:47:04.796090       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:47:04.796096       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:47:04.897817       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:47:04.897826       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:47:04.897858       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3ab7a046e22a5863f4b346224e2f97c150b588c9db1300593a985d262da67008] <==
	E1121 14:46:53.856032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:46:53.861230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:46:53.861531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:46:53.861678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:46:53.871029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:46:54.659892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:46:54.690709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:46:54.746032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:46:54.754050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:46:54.781352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:46:54.862489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:46:54.872673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:46:54.874528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:46:54.893411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:46:54.983104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:46:54.989643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:46:55.047213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:46:55.080021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:46:55.169612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:46:55.187169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:46:55.201968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:46:55.205562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:46:55.237978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1121 14:46:55.290233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1121 14:46:57.518380       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:46:57 embed-certs-695324 kubelet[1475]: I1121 14:46:57.989192    1475 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 21 14:46:58 embed-certs-695324 kubelet[1475]: I1121 14:46:58.136219    1475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-695324" podStartSLOduration=1.1361956229999999 podStartE2EDuration="1.136195623s" podCreationTimestamp="2025-11-21 14:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:46:58.126492132 +0000 UTC m=+1.377369443" watchObservedRunningTime="2025-11-21 14:46:58.136195623 +0000 UTC m=+1.387072918"
	Nov 21 14:47:01 embed-certs-695324 kubelet[1475]: I1121 14:47:01.863023    1475 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:47:01 embed-certs-695324 kubelet[1475]: I1121 14:47:01.865290    1475 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:47:02 embed-certs-695324 kubelet[1475]: I1121 14:47:02.904457    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0cf5293a-46ff-4f69-ba6e-3e4d9bb7e8c1-cni-cfg\") pod \"kindnet-7hksz\" (UID: \"0cf5293a-46ff-4f69-ba6e-3e4d9bb7e8c1\") " pod="kube-system/kindnet-7hksz"
	Nov 21 14:47:02 embed-certs-695324 kubelet[1475]: I1121 14:47:02.904505    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cf5293a-46ff-4f69-ba6e-3e4d9bb7e8c1-xtables-lock\") pod \"kindnet-7hksz\" (UID: \"0cf5293a-46ff-4f69-ba6e-3e4d9bb7e8c1\") " pod="kube-system/kindnet-7hksz"
	Nov 21 14:47:02 embed-certs-695324 kubelet[1475]: I1121 14:47:02.904532    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc8f8403-f2a8-4b2a-9e2e-1be4b561b237-xtables-lock\") pod \"kube-proxy-r9v4p\" (UID: \"dc8f8403-f2a8-4b2a-9e2e-1be4b561b237\") " pod="kube-system/kube-proxy-r9v4p"
	Nov 21 14:47:02 embed-certs-695324 kubelet[1475]: I1121 14:47:02.904549    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc8f8403-f2a8-4b2a-9e2e-1be4b561b237-lib-modules\") pod \"kube-proxy-r9v4p\" (UID: \"dc8f8403-f2a8-4b2a-9e2e-1be4b561b237\") " pod="kube-system/kube-proxy-r9v4p"
	Nov 21 14:47:02 embed-certs-695324 kubelet[1475]: I1121 14:47:02.904566    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnsl5\" (UniqueName: \"kubernetes.io/projected/dc8f8403-f2a8-4b2a-9e2e-1be4b561b237-kube-api-access-wnsl5\") pod \"kube-proxy-r9v4p\" (UID: \"dc8f8403-f2a8-4b2a-9e2e-1be4b561b237\") " pod="kube-system/kube-proxy-r9v4p"
	Nov 21 14:47:02 embed-certs-695324 kubelet[1475]: I1121 14:47:02.904586    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cf5293a-46ff-4f69-ba6e-3e4d9bb7e8c1-lib-modules\") pod \"kindnet-7hksz\" (UID: \"0cf5293a-46ff-4f69-ba6e-3e4d9bb7e8c1\") " pod="kube-system/kindnet-7hksz"
	Nov 21 14:47:02 embed-certs-695324 kubelet[1475]: I1121 14:47:02.904607    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6ftp\" (UniqueName: \"kubernetes.io/projected/0cf5293a-46ff-4f69-ba6e-3e4d9bb7e8c1-kube-api-access-d6ftp\") pod \"kindnet-7hksz\" (UID: \"0cf5293a-46ff-4f69-ba6e-3e4d9bb7e8c1\") " pod="kube-system/kindnet-7hksz"
	Nov 21 14:47:02 embed-certs-695324 kubelet[1475]: I1121 14:47:02.904643    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dc8f8403-f2a8-4b2a-9e2e-1be4b561b237-kube-proxy\") pod \"kube-proxy-r9v4p\" (UID: \"dc8f8403-f2a8-4b2a-9e2e-1be4b561b237\") " pod="kube-system/kube-proxy-r9v4p"
	Nov 21 14:47:03 embed-certs-695324 kubelet[1475]: I1121 14:47:03.097764    1475 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 21 14:47:05 embed-certs-695324 kubelet[1475]: I1121 14:47:05.319323    1475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r9v4p" podStartSLOduration=3.319302056 podStartE2EDuration="3.319302056s" podCreationTimestamp="2025-11-21 14:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:47:05.315091042 +0000 UTC m=+8.565968345" watchObservedRunningTime="2025-11-21 14:47:05.319302056 +0000 UTC m=+8.570179350"
	Nov 21 14:47:06 embed-certs-695324 kubelet[1475]: I1121 14:47:06.155467    1475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7hksz" podStartSLOduration=4.155444934 podStartE2EDuration="4.155444934s" podCreationTimestamp="2025-11-21 14:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:47:05.41347933 +0000 UTC m=+8.664356633" watchObservedRunningTime="2025-11-21 14:47:06.155444934 +0000 UTC m=+9.406322237"
	Nov 21 14:47:44 embed-certs-695324 kubelet[1475]: I1121 14:47:44.917850    1475 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:47:45 embed-certs-695324 kubelet[1475]: I1121 14:47:45.060892    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqcrg\" (UniqueName: \"kubernetes.io/projected/d5cc5588-78f5-4ba3-8929-ce406ad776cc-kube-api-access-xqcrg\") pod \"storage-provisioner\" (UID: \"d5cc5588-78f5-4ba3-8929-ce406ad776cc\") " pod="kube-system/storage-provisioner"
	Nov 21 14:47:45 embed-certs-695324 kubelet[1475]: I1121 14:47:45.061217    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d5cc5588-78f5-4ba3-8929-ce406ad776cc-tmp\") pod \"storage-provisioner\" (UID: \"d5cc5588-78f5-4ba3-8929-ce406ad776cc\") " pod="kube-system/storage-provisioner"
	Nov 21 14:47:45 embed-certs-695324 kubelet[1475]: I1121 14:47:45.161747    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1f4fe7a-90b6-4b7a-8bdd-e805634b811d-config-volume\") pod \"coredns-66bc5c9577-fs65k\" (UID: \"a1f4fe7a-90b6-4b7a-8bdd-e805634b811d\") " pod="kube-system/coredns-66bc5c9577-fs65k"
	Nov 21 14:47:45 embed-certs-695324 kubelet[1475]: I1121 14:47:45.162108    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66f6j\" (UniqueName: \"kubernetes.io/projected/a1f4fe7a-90b6-4b7a-8bdd-e805634b811d-kube-api-access-66f6j\") pod \"coredns-66bc5c9577-fs65k\" (UID: \"a1f4fe7a-90b6-4b7a-8bdd-e805634b811d\") " pod="kube-system/coredns-66bc5c9577-fs65k"
	Nov 21 14:47:46 embed-certs-695324 kubelet[1475]: I1121 14:47:46.417213    1475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.41719619 podStartE2EDuration="41.41719619s" podCreationTimestamp="2025-11-21 14:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:47:46.416975183 +0000 UTC m=+49.667852486" watchObservedRunningTime="2025-11-21 14:47:46.41719619 +0000 UTC m=+49.668073493"
	Nov 21 14:47:48 embed-certs-695324 kubelet[1475]: I1121 14:47:48.505568    1475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fs65k" podStartSLOduration=45.505542724 podStartE2EDuration="45.505542724s" podCreationTimestamp="2025-11-21 14:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:47:46.433712318 +0000 UTC m=+49.684589621" watchObservedRunningTime="2025-11-21 14:47:48.505542724 +0000 UTC m=+51.756420043"
	Nov 21 14:47:48 embed-certs-695324 kubelet[1475]: I1121 14:47:48.689851    1475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97n7t\" (UniqueName: \"kubernetes.io/projected/2825f6cd-a93e-4f6a-9629-98e365849793-kube-api-access-97n7t\") pod \"busybox\" (UID: \"2825f6cd-a93e-4f6a-9629-98e365849793\") " pod="default/busybox"
	Nov 21 14:47:56 embed-certs-695324 kubelet[1475]: E1121 14:47:56.732777    1475 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.76.2:59294->192.168.76.2:10010: write tcp 192.168.76.2:10250->192.168.76.2:38716: write: connection reset by peer
	Nov 21 14:47:56 embed-certs-695324 kubelet[1475]: E1121 14:47:56.733179    1475 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.76.2:59294->192.168.76.2:10010: write tcp 192.168.76.2:59294->192.168.76.2:10010: write: broken pipe
	
	
	==> storage-provisioner [c55d3da92c0df8483626a4c994c86da24be9e5fcfcd848573b5ecc5ef7788bc7] <==
	I1121 14:47:45.490746       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:47:45.493701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:45.499823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:47:45.500170       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:47:45.500454       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-695324_3b3a2498-460f-4504-9aed-c216de4806f5!
	I1121 14:47:45.501589       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"95ae73aa-dde4-4062-8331-e524dfe4331a", APIVersion:"v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-695324_3b3a2498-460f-4504-9aed-c216de4806f5 became leader
	W1121 14:47:45.510041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:45.513534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:47:45.601588       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-695324_3b3a2498-460f-4504-9aed-c216de4806f5!
	W1121 14:47:47.516976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:47.521578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:49.525561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:49.536122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:51.539371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:51.544286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:53.550082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:53.559068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:55.563294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:55.569943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:57.574883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:57.595136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:59.598863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:47:59.603897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:48:01.607652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:48:01.615678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-695324 -n embed-certs-695324
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-695324 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (15.17s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-219338 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a18b0d57-5cfa-4219-961a-f30cbe26f965] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a18b0d57-5cfa-4219-961a-f30cbe26f965] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.013277225s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-219338 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
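Each of the four DeployApp failures in this report trips the same assertion: the busybox pod reaches Running, but the soft open-files limit reported inside the container is 1024 rather than the 1048576 the suite expects. The sketch below is a minimal, self-contained illustration of that kind of check; it is not the actual start_stop_delete_test.go source, and the function name and the hardcoded context/expected values are illustrative only (taken from this run).

	// Hypothetical sketch of the nofile-limit assertion; not the real test code.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// checkNoFileLimit runs `ulimit -n` inside the busybox pod via kubectl exec
	// and compares the reported soft limit against the expected value.
	func checkNoFileLimit(context, expected string) error {
		out, err := exec.Command("kubectl", "--context", context,
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			return fmt.Errorf("kubectl exec failed: %w", err)
		}
		got := strings.TrimSpace(string(out))
		if got != expected {
			return fmt.Errorf("'ulimit -n' returned %s, expected %s", got, expected)
		}
		return nil
	}
	
	func main() {
		// Context name and expected limit as seen in this test run.
		if err := checkNoFileLimit("default-k8s-diff-port-219338", "1048576"); err != nil {
			fmt.Println(err)
		}
	}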
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-219338
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-219338:

-- stdout --
	[
	    {
	        "Id": "f206fc018ba2f9ec3dc45af716d2148b75dfb158831ec03a387c554a1752aade",
	        "Created": "2025-11-21T14:49:15.38263101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2860971,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:49:15.443616996Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/f206fc018ba2f9ec3dc45af716d2148b75dfb158831ec03a387c554a1752aade/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f206fc018ba2f9ec3dc45af716d2148b75dfb158831ec03a387c554a1752aade/hostname",
	        "HostsPath": "/var/lib/docker/containers/f206fc018ba2f9ec3dc45af716d2148b75dfb158831ec03a387c554a1752aade/hosts",
	        "LogPath": "/var/lib/docker/containers/f206fc018ba2f9ec3dc45af716d2148b75dfb158831ec03a387c554a1752aade/f206fc018ba2f9ec3dc45af716d2148b75dfb158831ec03a387c554a1752aade-json.log",
	        "Name": "/default-k8s-diff-port-219338",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-219338:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-219338",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f206fc018ba2f9ec3dc45af716d2148b75dfb158831ec03a387c554a1752aade",
	                "LowerDir": "/var/lib/docker/overlay2/202aec692d493c87f1b4bea263011ab454a4e0cd9a8cbcec95157ab17dd9d92b-init/diff:/var/lib/docker/overlay2/789a4b9f9866e585907664b1eaf98d94438dbf699e0511f3ca5ba5ea682b005e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/202aec692d493c87f1b4bea263011ab454a4e0cd9a8cbcec95157ab17dd9d92b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/202aec692d493c87f1b4bea263011ab454a4e0cd9a8cbcec95157ab17dd9d92b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/202aec692d493c87f1b4bea263011ab454a4e0cd9a8cbcec95157ab17dd9d92b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-219338",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-219338/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-219338",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-219338",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-219338",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a79f8634a9793006f40030934ff9c7c16885cb3c8909b9a75e467005269452a",
	            "SandboxKey": "/var/run/docker/netns/3a79f8634a97",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36750"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36751"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36754"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36752"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36753"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-219338": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:41:9e:93:b6:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "68078880d2983097a6f8a3152596472180ac5d76d57269e8f9db42495c53210d",
	                    "EndpointID": "0f8cb31d5e3095b1cd9c6a6f36bf3d7c72e8fb27df33849940f2ce58ab5fb334",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-219338",
	                        "f206fc018ba2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-219338 -n default-k8s-diff-port-219338
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-219338 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-219338 logs -n 25: (1.47146844s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-695324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-695324           │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:48 UTC │
	│ start   │ -p embed-certs-695324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-695324           │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:49 UTC │
	│ image   │ no-preload-208006 image list --format=json                                                                                                                                                                                                          │ no-preload-208006            │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ pause   │ -p no-preload-208006 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-208006            │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ unpause │ -p no-preload-208006 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-208006            │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ delete  │ -p no-preload-208006                                                                                                                                                                                                                                │ no-preload-208006            │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ delete  │ -p no-preload-208006                                                                                                                                                                                                                                │ no-preload-208006            │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ delete  │ -p disable-driver-mounts-422442                                                                                                                                                                                                                     │ disable-driver-mounts-422442 │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ start   │ -p default-k8s-diff-port-219338 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-219338 │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:50 UTC │
	│ image   │ embed-certs-695324 image list --format=json                                                                                                                                                                                                         │ embed-certs-695324           │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ pause   │ -p embed-certs-695324 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-695324           │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ unpause │ -p embed-certs-695324 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-695324           │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ delete  │ -p embed-certs-695324                                                                                                                                                                                                                               │ embed-certs-695324           │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ delete  │ -p embed-certs-695324                                                                                                                                                                                                                               │ embed-certs-695324           │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ start   │ -p newest-cni-921069 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:50 UTC │
	│ addons  │ enable metrics-server -p newest-cni-921069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ stop    │ -p newest-cni-921069 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ addons  │ enable dashboard -p newest-cni-921069 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ start   │ -p newest-cni-921069 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ image   │ newest-cni-921069 image list --format=json                                                                                                                                                                                                          │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ pause   │ -p newest-cni-921069 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ unpause │ -p newest-cni-921069 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ delete  │ -p newest-cni-921069                                                                                                                                                                                                                                │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ delete  │ -p newest-cni-921069                                                                                                                                                                                                                                │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ start   │ -p auto-650772 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-650772                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:50:35
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:50:35.017849 2870677 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:50:35.018076 2870677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:50:35.018091 2870677 out.go:374] Setting ErrFile to fd 2...
	I1121 14:50:35.018097 2870677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:50:35.018386 2870677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:50:35.018896 2870677 out.go:368] Setting JSON to false
	I1121 14:50:35.019967 2870677 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70383,"bootTime":1763666252,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 14:50:35.020045 2870677 start.go:143] virtualization:  
	I1121 14:50:35.023601 2870677 out.go:179] * [auto-650772] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:50:35.027724 2870677 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:50:35.027837 2870677 notify.go:221] Checking for updates...
	I1121 14:50:35.034110 2870677 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:50:35.037179 2870677 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:50:35.040221 2870677 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	I1121 14:50:35.043222 2870677 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:50:35.046269 2870677 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:50:35.049837 2870677 config.go:182] Loaded profile config "default-k8s-diff-port-219338": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:50:35.049959 2870677 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:50:35.095921 2870677 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:50:35.096052 2870677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:50:35.253822 2870677 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:50:35.243614474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:50:35.253932 2870677 docker.go:319] overlay module found
	I1121 14:50:35.257137 2870677 out.go:179] * Using the docker driver based on user configuration
	I1121 14:50:35.259985 2870677 start.go:309] selected driver: docker
	I1121 14:50:35.260013 2870677 start.go:930] validating driver "docker" against <nil>
	I1121 14:50:35.260029 2870677 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:50:35.260784 2870677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:50:35.370907 2870677 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:50:35.359825401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:50:35.371077 2870677 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:50:35.371331 2870677 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:50:35.374320 2870677 out.go:179] * Using Docker driver with root privileges
	I1121 14:50:35.377237 2870677 cni.go:84] Creating CNI manager for ""
	I1121 14:50:35.377340 2870677 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:50:35.377355 2870677 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 14:50:35.377474 2870677 start.go:353] cluster config:
	{Name:auto-650772 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-650772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:50:35.380705 2870677 out.go:179] * Starting "auto-650772" primary control-plane node in "auto-650772" cluster
	I1121 14:50:35.383519 2870677 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 14:50:35.386462 2870677 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:50:35.389423 2870677 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:50:35.389488 2870677 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1121 14:50:35.389502 2870677 cache.go:65] Caching tarball of preloaded images
	I1121 14:50:35.389656 2870677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:50:35.389909 2870677 preload.go:238] Found /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1121 14:50:35.389935 2870677 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1121 14:50:35.390904 2870677 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/config.json ...
	I1121 14:50:35.390943 2870677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/config.json: {Name:mk82699e344dc78b7be36099eaafc18000387f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:50:35.418949 2870677 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:50:35.418975 2870677 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:50:35.418995 2870677 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:50:35.419022 2870677 start.go:360] acquireMachinesLock for auto-650772: {Name:mk277c1ac3cd64b70ae58f78f0535c6ce70f5ac9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:50:35.419169 2870677 start.go:364] duration metric: took 118.635µs to acquireMachinesLock for "auto-650772"
	I1121 14:50:35.419211 2870677 start.go:93] Provisioning new machine with config: &{Name:auto-650772 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-650772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:50:35.419306 2870677 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:50:34.686282 2860542 pod_ready.go:83] waiting for pod "kube-proxy-s4wjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:50:35.085832 2860542 pod_ready.go:94] pod "kube-proxy-s4wjg" is "Ready"
	I1121 14:50:35.085866 2860542 pod_ready.go:86] duration metric: took 399.557154ms for pod "kube-proxy-s4wjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:50:35.302213 2860542 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-219338" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:50:35.685006 2860542 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-219338" is "Ready"
	I1121 14:50:35.685044 2860542 pod_ready.go:86] duration metric: took 382.803857ms for pod "kube-scheduler-default-k8s-diff-port-219338" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:50:35.685058 2860542 pod_ready.go:40] duration metric: took 1.605191747s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:50:35.779448 2860542 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 14:50:35.784433 2860542 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-219338" cluster and "default" namespace by default
	I1121 14:50:35.424635 2870677 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:50:35.424998 2870677 start.go:159] libmachine.API.Create for "auto-650772" (driver="docker")
	I1121 14:50:35.425123 2870677 client.go:173] LocalClient.Create starting
	I1121 14:50:35.425263 2870677 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem
	I1121 14:50:35.425320 2870677 main.go:143] libmachine: Decoding PEM data...
	I1121 14:50:35.425342 2870677 main.go:143] libmachine: Parsing certificate...
	I1121 14:50:35.425416 2870677 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem
	I1121 14:50:35.425456 2870677 main.go:143] libmachine: Decoding PEM data...
	I1121 14:50:35.425473 2870677 main.go:143] libmachine: Parsing certificate...
	I1121 14:50:35.425981 2870677 cli_runner.go:164] Run: docker network inspect auto-650772 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:50:35.450172 2870677 cli_runner.go:211] docker network inspect auto-650772 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:50:35.450253 2870677 network_create.go:284] running [docker network inspect auto-650772] to gather additional debugging logs...
	I1121 14:50:35.450271 2870677 cli_runner.go:164] Run: docker network inspect auto-650772
	W1121 14:50:35.470657 2870677 cli_runner.go:211] docker network inspect auto-650772 returned with exit code 1
	I1121 14:50:35.470690 2870677 network_create.go:287] error running [docker network inspect auto-650772]: docker network inspect auto-650772: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-650772 not found
	I1121 14:50:35.470705 2870677 network_create.go:289] output of [docker network inspect auto-650772]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-650772 not found
	
	** /stderr **
	I1121 14:50:35.470818 2870677 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:50:35.491598 2870677 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c13a3bee40ff IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:9f:8e:c6:2a:d6} reservation:<nil>}
	I1121 14:50:35.492096 2870677 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1859e8fd5584 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:c6:00:f6:5b:96} reservation:<nil>}
	I1121 14:50:35.492496 2870677 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-44a9b6062c4d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:b5:31:a5:3d:f0} reservation:<nil>}
	I1121 14:50:35.493201 2870677 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400198fc90}
	I1121 14:50:35.493227 2870677 network_create.go:124] attempt to create docker network auto-650772 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1121 14:50:35.493283 2870677 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-650772 auto-650772
	I1121 14:50:35.552274 2870677 network_create.go:108] docker network auto-650772 192.168.76.0/24 created
	I1121 14:50:35.552309 2870677 kic.go:121] calculated static IP "192.168.76.2" for the "auto-650772" container
	I1121 14:50:35.552397 2870677 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:50:35.569447 2870677 cli_runner.go:164] Run: docker volume create auto-650772 --label name.minikube.sigs.k8s.io=auto-650772 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:50:35.588722 2870677 oci.go:103] Successfully created a docker volume auto-650772
	I1121 14:50:35.588828 2870677 cli_runner.go:164] Run: docker run --rm --name auto-650772-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-650772 --entrypoint /usr/bin/test -v auto-650772:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:50:36.278111 2870677 oci.go:107] Successfully prepared a docker volume auto-650772
	I1121 14:50:36.278183 2870677 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:50:36.278192 2870677 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:50:36.278273 2870677 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-650772:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 14:50:40.742919 2870677 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-650772:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.46460709s)
	I1121 14:50:40.742973 2870677 kic.go:203] duration metric: took 4.464764936s to extract preloaded images to volume ...
	W1121 14:50:40.743108 2870677 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 14:50:40.743252 2870677 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:50:40.820636 2870677 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-650772 --name auto-650772 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-650772 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-650772 --network auto-650772 --ip 192.168.76.2 --volume auto-650772:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:50:41.150910 2870677 cli_runner.go:164] Run: docker container inspect auto-650772 --format={{.State.Running}}
	I1121 14:50:41.179079 2870677 cli_runner.go:164] Run: docker container inspect auto-650772 --format={{.State.Status}}
	I1121 14:50:41.211240 2870677 cli_runner.go:164] Run: docker exec auto-650772 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:50:41.277937 2870677 oci.go:144] the created container "auto-650772" has a running status.
	I1121 14:50:41.277964 2870677 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/auto-650772/id_rsa...
	I1121 14:50:41.560307 2870677 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/auto-650772/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:50:41.585373 2870677 cli_runner.go:164] Run: docker container inspect auto-650772 --format={{.State.Status}}
	I1121 14:50:41.603760 2870677 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:50:41.603778 2870677 kic_runner.go:114] Args: [docker exec --privileged auto-650772 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:50:41.671791 2870677 cli_runner.go:164] Run: docker container inspect auto-650772 --format={{.State.Status}}
	I1121 14:50:41.702109 2870677 machine.go:94] provisionDockerMachine start ...
	I1121 14:50:41.702217 2870677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-650772
	I1121 14:50:41.729863 2870677 main.go:143] libmachine: Using SSH client type: native
	I1121 14:50:41.730193 2870677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36766 <nil> <nil>}
	I1121 14:50:41.730202 2870677 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:50:41.730819 2870677 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40524->127.0.0.1:36766: read: connection reset by peer
	I1121 14:50:44.876628 2870677 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-650772
	
	I1121 14:50:44.876654 2870677 ubuntu.go:182] provisioning hostname "auto-650772"
	I1121 14:50:44.876740 2870677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-650772
	I1121 14:50:44.894497 2870677 main.go:143] libmachine: Using SSH client type: native
	I1121 14:50:44.894811 2870677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36766 <nil> <nil>}
	I1121 14:50:44.894834 2870677 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-650772 && echo "auto-650772" | sudo tee /etc/hostname
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	dc94243895048       1611cd07b61d5       7 seconds ago        Running             busybox                   0                   0456a1d55983c       busybox                                                default
	34a9910078724       138784d87c9c5       13 seconds ago       Running             coredns                   0                   8bb7e0a112adc       coredns-66bc5c9577-6g67n                               kube-system
	8863e2663208f       ba04bb24b9575       13 seconds ago       Running             storage-provisioner       0                   118a4bd724072       storage-provisioner                                    kube-system
	d26cbda2f8de8       b1a8c6f707935       55 seconds ago       Running             kindnet-cni               0                   1c98bdbd34315       kindnet-l9ck4                                          kube-system
	af1bebac5b832       05baa95f5142d       55 seconds ago       Running             kube-proxy                0                   2c5eb800dfa85       kube-proxy-s4wjg                                       kube-system
	8e9e2caeab703       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   88c0a75d096b3       kube-scheduler-default-k8s-diff-port-219338            kube-system
	236365ed0b89e       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   b39a1fa9fdad1       kube-controller-manager-default-k8s-diff-port-219338   kube-system
	c90873fe60dcb       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   ede6fa7770337       kube-apiserver-default-k8s-diff-port-219338            kube-system
	ff190b41749d4       a1894772a478e       About a minute ago   Running             etcd                      0                   e83f2c49ef9b7       etcd-default-k8s-diff-port-219338                      kube-system
	
	
	==> containerd <==
	Nov 21 14:50:32 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:32.850693327Z" level=info msg="connecting to shim 8863e2663208fed91ac25fb86ba5070660b82feb717ecda35bc3e85e7c7ebcfe" address="unix:///run/containerd/s/b46935ae24d9a753067cd8464235ba783f8942100ffd0ea46d5b9e9caedf9089" protocol=ttrpc version=3
	Nov 21 14:50:32 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:32.898898305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6g67n,Uid:66f36548-8cb3-4eee-b5e7-abfe4d6a0195,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bb7e0a112adc33cc4617b5937a7571cdca32a5195155f9ddd1070c72a5364d7\""
	Nov 21 14:50:32 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:32.931985474Z" level=info msg="CreateContainer within sandbox \"8bb7e0a112adc33cc4617b5937a7571cdca32a5195155f9ddd1070c72a5364d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 21 14:50:32 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:32.958598516Z" level=info msg="Container 34a9910078724e4286cb3b5efe631cb00e516b079c2f6e3bb2cd9ef354159e19: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:50:32 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:32.971868591Z" level=info msg="CreateContainer within sandbox \"8bb7e0a112adc33cc4617b5937a7571cdca32a5195155f9ddd1070c72a5364d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"34a9910078724e4286cb3b5efe631cb00e516b079c2f6e3bb2cd9ef354159e19\""
	Nov 21 14:50:32 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:32.972913057Z" level=info msg="StartContainer for \"34a9910078724e4286cb3b5efe631cb00e516b079c2f6e3bb2cd9ef354159e19\""
	Nov 21 14:50:32 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:32.973789233Z" level=info msg="connecting to shim 34a9910078724e4286cb3b5efe631cb00e516b079c2f6e3bb2cd9ef354159e19" address="unix:///run/containerd/s/cb36b0dccae93f62114dde92847d2cca5f828188d34e1c12f74a9bf7019b69cd" protocol=ttrpc version=3
	Nov 21 14:50:33 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:33.008751343Z" level=info msg="StartContainer for \"8863e2663208fed91ac25fb86ba5070660b82feb717ecda35bc3e85e7c7ebcfe\" returns successfully"
	Nov 21 14:50:33 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:33.136263073Z" level=info msg="StartContainer for \"34a9910078724e4286cb3b5efe631cb00e516b079c2f6e3bb2cd9ef354159e19\" returns successfully"
	Nov 21 14:50:36 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:36.442977313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:a18b0d57-5cfa-4219-961a-f30cbe26f965,Namespace:default,Attempt:0,}"
	Nov 21 14:50:36 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:36.532227450Z" level=info msg="connecting to shim 0456a1d55983c73b15cc214eba92e5445c10dab7ef36fb11ac58e9e1db320e1a" address="unix:///run/containerd/s/4494cacb352d21ad48189cd449b3edac4190b183c5475793a0455bb8e49371fb" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:50:36 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:36.605118297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:a18b0d57-5cfa-4219-961a-f30cbe26f965,Namespace:default,Attempt:0,} returns sandbox id \"0456a1d55983c73b15cc214eba92e5445c10dab7ef36fb11ac58e9e1db320e1a\""
	Nov 21 14:50:36 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:36.607751115Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.606964494Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.609715004Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.612406635Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.616703332Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.617270659Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.006612214s"
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.617896265Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.629789486Z" level=info msg="CreateContainer within sandbox \"0456a1d55983c73b15cc214eba92e5445c10dab7ef36fb11ac58e9e1db320e1a\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.644149244Z" level=info msg="Container dc94243895048aab8eac12746934f101128987a00c4b97bca12e5f8ec34f3421: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.654545473Z" level=info msg="CreateContainer within sandbox \"0456a1d55983c73b15cc214eba92e5445c10dab7ef36fb11ac58e9e1db320e1a\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"dc94243895048aab8eac12746934f101128987a00c4b97bca12e5f8ec34f3421\""
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.655278490Z" level=info msg="StartContainer for \"dc94243895048aab8eac12746934f101128987a00c4b97bca12e5f8ec34f3421\""
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.656483699Z" level=info msg="connecting to shim dc94243895048aab8eac12746934f101128987a00c4b97bca12e5f8ec34f3421" address="unix:///run/containerd/s/4494cacb352d21ad48189cd449b3edac4190b183c5475793a0455bb8e49371fb" protocol=ttrpc version=3
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.760507941Z" level=info msg="StartContainer for \"dc94243895048aab8eac12746934f101128987a00c4b97bca12e5f8ec34f3421\" returns successfully"
	
	
	==> coredns [34a9910078724e4286cb3b5efe631cb00e516b079c2f6e3bb2cd9ef354159e19] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50270 - 23985 "HINFO IN 1567346524217854883.1598740298227831180. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033628282s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-219338
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-219338
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=default-k8s-diff-port-219338
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_49_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:49:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-219338
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:50:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:50:32 +0000   Fri, 21 Nov 2025 14:49:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:50:32 +0000   Fri, 21 Nov 2025 14:49:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:50:32 +0000   Fri, 21 Nov 2025 14:49:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:50:32 +0000   Fri, 21 Nov 2025 14:50:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-219338
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                b7fd1707-d8d3-495c-9e74-3485b60b03c0
	  Boot ID:                    41b0e09d-5a9a-49c9-8980-dca608ba3fce
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-6g67n                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-219338                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-l9ck4                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-219338             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-219338    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-s4wjg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-219338             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Warning  CgroupV1                 71s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-219338 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-219338 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x7 over 71s)  kubelet          Node default-k8s-diff-port-219338 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node default-k8s-diff-port-219338 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node default-k8s-diff-port-219338 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node default-k8s-diff-port-219338 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node default-k8s-diff-port-219338 event: Registered Node default-k8s-diff-port-219338 in Controller
	  Normal   NodeReady                14s                kubelet          Node default-k8s-diff-port-219338 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 13:02] overlayfs: idmapped layers are currently not supported
	[Nov21 13:03] overlayfs: idmapped layers are currently not supported
	[Nov21 13:06] overlayfs: idmapped layers are currently not supported
	[Nov21 13:08] overlayfs: idmapped layers are currently not supported
	[Nov21 13:09] overlayfs: idmapped layers are currently not supported
	[Nov21 13:10] overlayfs: idmapped layers are currently not supported
	[ +19.808801] overlayfs: idmapped layers are currently not supported
	[Nov21 13:11] overlayfs: idmapped layers are currently not supported
	[Nov21 13:12] overlayfs: idmapped layers are currently not supported
	[Nov21 13:13] overlayfs: idmapped layers are currently not supported
	[Nov21 13:14] overlayfs: idmapped layers are currently not supported
	[Nov21 13:15] overlayfs: idmapped layers are currently not supported
	[ +16.772572] overlayfs: idmapped layers are currently not supported
	[Nov21 13:16] overlayfs: idmapped layers are currently not supported
	[Nov21 13:17] overlayfs: idmapped layers are currently not supported
	[ +27.396777] overlayfs: idmapped layers are currently not supported
	[Nov21 13:18] overlayfs: idmapped layers are currently not supported
	[ +25.430119] overlayfs: idmapped layers are currently not supported
	[Nov21 13:19] overlayfs: idmapped layers are currently not supported
	[Nov21 13:20] overlayfs: idmapped layers are currently not supported
	[Nov21 13:21] overlayfs: idmapped layers are currently not supported
	[Nov21 13:22] overlayfs: idmapped layers are currently not supported
	[Nov21 13:23] overlayfs: idmapped layers are currently not supported
	[Nov21 13:24] overlayfs: idmapped layers are currently not supported
	[Nov21 13:55] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [ff190b41749d4b1c775eb7e2155bf86c6a1cabc60eaad113d5260a138ec7daae] <==
	{"level":"warn","ts":"2025-11-21T14:49:40.359990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.373514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.392528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.418756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.434245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.449724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.474886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.492940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.510857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.546010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.565414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.590677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.616225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.637342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.668820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.689490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.709794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.746463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.775509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.816296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.845673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.866401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.902108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.922029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:41.027994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60858","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:50:46 up 19:33,  0 user,  load average: 3.30, 3.63, 3.11
	Linux default-k8s-diff-port-219338 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d26cbda2f8de8f2cb0058c7438fbcd477c081f160b63c206ee4a2c103f58aade] <==
	I1121 14:49:51.930652       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:49:51.949125       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:49:51.949288       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:49:51.949306       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:49:51.949322       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:49:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:49:52.233307       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:49:52.233340       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:49:52.233351       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:49:52.233730       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 14:50:22.229524       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 14:50:22.230817       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 14:50:22.234291       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 14:50:22.234414       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1121 14:50:23.733436       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:50:23.733473       1 metrics.go:72] Registering metrics
	I1121 14:50:23.733524       1 controller.go:711] "Syncing nftables rules"
	I1121 14:50:32.235267       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:50:32.235319       1 main.go:301] handling current node
	I1121 14:50:42.229177       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:50:42.229331       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c90873fe60dcb064026fce4fb3a55bb07a32b04e36cec547b82f4c4421d0a85a] <==
	I1121 14:49:42.260141       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 14:49:42.278234       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:49:42.285137       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:49:42.316304       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:49:42.316918       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 14:49:42.378321       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:49:42.386592       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:49:42.839899       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:49:42.860378       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:49:42.860404       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:49:43.975681       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:49:44.064705       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:49:44.148146       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:49:44.207288       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:49:44.228552       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1121 14:49:44.230019       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:49:44.238137       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:49:45.158757       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:49:45.180057       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:49:45.209198       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:49:49.225934       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:49:49.255116       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:49:50.039110       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:49:50.129991       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1121 14:50:45.442043       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:55322: use of closed network connection
	
	
	==> kube-controller-manager [236365ed0b89eac33c6dcbdf426b5f9ce5ac85899aba3be307c923d96de4f8d6] <==
	I1121 14:49:49.207848       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 14:49:49.207887       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 14:49:49.207920       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 14:49:49.207925       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 14:49:49.207929       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 14:49:49.219806       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:49:49.229391       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-219338" podCIDRs=["10.244.0.0/24"]
	I1121 14:49:49.241612       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:49:49.241819       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 14:49:49.241961       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:49:49.241989       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:49:49.242382       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 14:49:49.242418       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:49:49.244519       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:49:49.253155       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:49:49.253301       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:49:49.253645       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:49:49.253707       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:49:49.253769       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:49:49.254259       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:49:49.257864       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 14:49:49.261082       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:49:49.269533       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:49:49.270874       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 14:50:34.205748       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [af1bebac5b832d228c4a45b9f1a0a1ba0aef88ec756c7611d430e1ed7b9856ec] <==
	I1121 14:49:51.942190       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:49:52.083028       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:49:52.191890       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:49:52.191928       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:49:52.192006       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:49:52.361385       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:49:52.361438       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:49:52.365762       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:49:52.366083       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:49:52.366119       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:49:52.368037       1 config.go:200] "Starting service config controller"
	I1121 14:49:52.368054       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:49:52.368070       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:49:52.368074       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:49:52.368087       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:49:52.368091       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:49:52.371814       1 config.go:309] "Starting node config controller"
	I1121 14:49:52.371830       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:49:52.371838       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:49:52.468643       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:49:52.468691       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:49:52.468730       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8e9e2caeab703d91901a33693a056e2c031273a1789496fdfee9409afb25376a] <==
	E1121 14:49:42.340560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:49:42.340614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:49:42.348403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:49:42.348705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:49:42.348854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:49:42.348635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:49:42.349137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:49:42.349268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:49:42.349384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:49:42.349430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:49:42.351908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:49:43.138495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:49:43.197812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:49:43.223692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:49:43.277384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:49:43.300616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:49:43.327519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:49:43.333576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:49:43.353839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1121 14:49:43.362189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:49:43.444091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:49:43.501282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:49:43.560180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:49:43.582367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1121 14:49:45.933540       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:49:46 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:46.566221    1478 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-default-k8s-diff-port-219338"
	Nov 21 14:49:46 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:46.566402    1478 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-default-k8s-diff-port-219338"
	Nov 21 14:49:46 default-k8s-diff-port-219338 kubelet[1478]: E1121 14:49:46.617314    1478 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-219338\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-219338"
	Nov 21 14:49:46 default-k8s-diff-port-219338 kubelet[1478]: E1121 14:49:46.631410    1478 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-219338\" already exists" pod="kube-system/etcd-default-k8s-diff-port-219338"
	Nov 21 14:49:49 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:49.251994    1478 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:49:49 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:49.258203    1478 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.366847    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5b6dee86-9d86-4845-bdde-7a7b7e9ca99d-cni-cfg\") pod \"kindnet-l9ck4\" (UID: \"5b6dee86-9d86-4845-bdde-7a7b7e9ca99d\") " pod="kube-system/kindnet-l9ck4"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.366913    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b6dee86-9d86-4845-bdde-7a7b7e9ca99d-xtables-lock\") pod \"kindnet-l9ck4\" (UID: \"5b6dee86-9d86-4845-bdde-7a7b7e9ca99d\") " pod="kube-system/kindnet-l9ck4"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.366953    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b6dee86-9d86-4845-bdde-7a7b7e9ca99d-lib-modules\") pod \"kindnet-l9ck4\" (UID: \"5b6dee86-9d86-4845-bdde-7a7b7e9ca99d\") " pod="kube-system/kindnet-l9ck4"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.366973    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d73e057f-eb19-4beb-bbba-590064218611-xtables-lock\") pod \"kube-proxy-s4wjg\" (UID: \"d73e057f-eb19-4beb-bbba-590064218611\") " pod="kube-system/kube-proxy-s4wjg"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.366992    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75fd6\" (UniqueName: \"kubernetes.io/projected/d73e057f-eb19-4beb-bbba-590064218611-kube-api-access-75fd6\") pod \"kube-proxy-s4wjg\" (UID: \"d73e057f-eb19-4beb-bbba-590064218611\") " pod="kube-system/kube-proxy-s4wjg"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.367020    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d73e057f-eb19-4beb-bbba-590064218611-kube-proxy\") pod \"kube-proxy-s4wjg\" (UID: \"d73e057f-eb19-4beb-bbba-590064218611\") " pod="kube-system/kube-proxy-s4wjg"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.367041    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d73e057f-eb19-4beb-bbba-590064218611-lib-modules\") pod \"kube-proxy-s4wjg\" (UID: \"d73e057f-eb19-4beb-bbba-590064218611\") " pod="kube-system/kube-proxy-s4wjg"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.367060    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn9qr\" (UniqueName: \"kubernetes.io/projected/5b6dee86-9d86-4845-bdde-7a7b7e9ca99d-kube-api-access-vn9qr\") pod \"kindnet-l9ck4\" (UID: \"5b6dee86-9d86-4845-bdde-7a7b7e9ca99d\") " pod="kube-system/kindnet-l9ck4"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.570506    1478 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 21 14:49:52 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:52.659960    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-l9ck4" podStartSLOduration=2.659940744 podStartE2EDuration="2.659940744s" podCreationTimestamp="2025-11-21 14:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:49:52.637768115 +0000 UTC m=+7.585059792" watchObservedRunningTime="2025-11-21 14:49:52.659940744 +0000 UTC m=+7.607232397"
	Nov 21 14:49:53 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:53.801749    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s4wjg" podStartSLOduration=3.801671756 podStartE2EDuration="3.801671756s" podCreationTimestamp="2025-11-21 14:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:49:52.68807686 +0000 UTC m=+7.635368513" watchObservedRunningTime="2025-11-21 14:49:53.801671756 +0000 UTC m=+8.748963409"
	Nov 21 14:50:32 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:50:32.250096    1478 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:50:32 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:50:32.461782    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dc9557e1-c6ed-40d6-b415-f6273f55c51b-tmp\") pod \"storage-provisioner\" (UID: \"dc9557e1-c6ed-40d6-b415-f6273f55c51b\") " pod="kube-system/storage-provisioner"
	Nov 21 14:50:32 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:50:32.461884    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvbb7\" (UniqueName: \"kubernetes.io/projected/dc9557e1-c6ed-40d6-b415-f6273f55c51b-kube-api-access-vvbb7\") pod \"storage-provisioner\" (UID: \"dc9557e1-c6ed-40d6-b415-f6273f55c51b\") " pod="kube-system/storage-provisioner"
	Nov 21 14:50:32 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:50:32.461928    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66f36548-8cb3-4eee-b5e7-abfe4d6a0195-config-volume\") pod \"coredns-66bc5c9577-6g67n\" (UID: \"66f36548-8cb3-4eee-b5e7-abfe4d6a0195\") " pod="kube-system/coredns-66bc5c9577-6g67n"
	Nov 21 14:50:32 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:50:32.461966    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96vm9\" (UniqueName: \"kubernetes.io/projected/66f36548-8cb3-4eee-b5e7-abfe4d6a0195-kube-api-access-96vm9\") pod \"coredns-66bc5c9577-6g67n\" (UID: \"66f36548-8cb3-4eee-b5e7-abfe4d6a0195\") " pod="kube-system/coredns-66bc5c9577-6g67n"
	Nov 21 14:50:33 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:50:33.740602    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6g67n" podStartSLOduration=43.740584105 podStartE2EDuration="43.740584105s" podCreationTimestamp="2025-11-21 14:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:50:33.740580889 +0000 UTC m=+48.687872542" watchObservedRunningTime="2025-11-21 14:50:33.740584105 +0000 UTC m=+48.687875758"
	Nov 21 14:50:36 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:50:36.129098    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=44.129079089 podStartE2EDuration="44.129079089s" podCreationTimestamp="2025-11-21 14:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:50:33.80377567 +0000 UTC m=+48.751067323" watchObservedRunningTime="2025-11-21 14:50:36.129079089 +0000 UTC m=+51.076370742"
	Nov 21 14:50:36 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:50:36.206582    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thvsz\" (UniqueName: \"kubernetes.io/projected/a18b0d57-5cfa-4219-961a-f30cbe26f965-kube-api-access-thvsz\") pod \"busybox\" (UID: \"a18b0d57-5cfa-4219-961a-f30cbe26f965\") " pod="default/busybox"
	
	
	==> storage-provisioner [8863e2663208fed91ac25fb86ba5070660b82feb717ecda35bc3e85e7c7ebcfe] <==
	I1121 14:50:33.030356       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:50:33.050712       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:50:33.050777       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:50:33.054951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:33.072546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:50:33.073391       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:50:33.077000       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-219338_d3d78954-0b96-40b0-956b-b1b8a98482be!
	I1121 14:50:33.085393       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2e79590-cffa-43b0-82e0-17087e2e2242", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-219338_d3d78954-0b96-40b0-956b-b1b8a98482be became leader
	W1121 14:50:33.087330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:33.112554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:50:33.183438       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-219338_d3d78954-0b96-40b0-956b-b1b8a98482be!
	W1121 14:50:35.116221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:35.280811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:37.285418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:37.294598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:39.298443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:39.320971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:41.324737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:41.335625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:43.339547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:43.344231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:45.362330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:45.375108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-219338 -n default-k8s-diff-port-219338
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-219338 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-219338
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-219338:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f206fc018ba2f9ec3dc45af716d2148b75dfb158831ec03a387c554a1752aade",
	        "Created": "2025-11-21T14:49:15.38263101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2860971,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:49:15.443616996Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/f206fc018ba2f9ec3dc45af716d2148b75dfb158831ec03a387c554a1752aade/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f206fc018ba2f9ec3dc45af716d2148b75dfb158831ec03a387c554a1752aade/hostname",
	        "HostsPath": "/var/lib/docker/containers/f206fc018ba2f9ec3dc45af716d2148b75dfb158831ec03a387c554a1752aade/hosts",
	        "LogPath": "/var/lib/docker/containers/f206fc018ba2f9ec3dc45af716d2148b75dfb158831ec03a387c554a1752aade/f206fc018ba2f9ec3dc45af716d2148b75dfb158831ec03a387c554a1752aade-json.log",
	        "Name": "/default-k8s-diff-port-219338",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-219338:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-219338",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f206fc018ba2f9ec3dc45af716d2148b75dfb158831ec03a387c554a1752aade",
	                "LowerDir": "/var/lib/docker/overlay2/202aec692d493c87f1b4bea263011ab454a4e0cd9a8cbcec95157ab17dd9d92b-init/diff:/var/lib/docker/overlay2/789a4b9f9866e585907664b1eaf98d94438dbf699e0511f3ca5ba5ea682b005e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/202aec692d493c87f1b4bea263011ab454a4e0cd9a8cbcec95157ab17dd9d92b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/202aec692d493c87f1b4bea263011ab454a4e0cd9a8cbcec95157ab17dd9d92b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/202aec692d493c87f1b4bea263011ab454a4e0cd9a8cbcec95157ab17dd9d92b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-219338",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-219338/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-219338",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-219338",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-219338",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a79f8634a9793006f40030934ff9c7c16885cb3c8909b9a75e467005269452a",
	            "SandboxKey": "/var/run/docker/netns/3a79f8634a97",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36750"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36751"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36754"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36752"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36753"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-219338": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:41:9e:93:b6:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "68078880d2983097a6f8a3152596472180ac5d76d57269e8f9db42495c53210d",
	                    "EndpointID": "0f8cb31d5e3095b1cd9c6a6f36bf3d7c72e8fb27df33849940f2ce58ab5fb334",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-219338",
	                        "f206fc018ba2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-219338 -n default-k8s-diff-port-219338
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-219338 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-219338 logs -n 25: (1.629505449s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-695324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-695324           │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:48 UTC │
	│ start   │ -p embed-certs-695324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-695324           │ jenkins │ v1.37.0 │ 21 Nov 25 14:48 UTC │ 21 Nov 25 14:49 UTC │
	│ image   │ no-preload-208006 image list --format=json                                                                                                                                                                                                          │ no-preload-208006            │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ pause   │ -p no-preload-208006 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-208006            │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ unpause │ -p no-preload-208006 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-208006            │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ delete  │ -p no-preload-208006                                                                                                                                                                                                                                │ no-preload-208006            │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ delete  │ -p no-preload-208006                                                                                                                                                                                                                                │ no-preload-208006            │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ delete  │ -p disable-driver-mounts-422442                                                                                                                                                                                                                     │ disable-driver-mounts-422442 │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ start   │ -p default-k8s-diff-port-219338 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-219338 │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:50 UTC │
	│ image   │ embed-certs-695324 image list --format=json                                                                                                                                                                                                         │ embed-certs-695324           │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ pause   │ -p embed-certs-695324 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-695324           │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ unpause │ -p embed-certs-695324 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-695324           │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ delete  │ -p embed-certs-695324                                                                                                                                                                                                                               │ embed-certs-695324           │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ delete  │ -p embed-certs-695324                                                                                                                                                                                                                               │ embed-certs-695324           │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:49 UTC │
	│ start   │ -p newest-cni-921069 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:49 UTC │ 21 Nov 25 14:50 UTC │
	│ addons  │ enable metrics-server -p newest-cni-921069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ stop    │ -p newest-cni-921069 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ addons  │ enable dashboard -p newest-cni-921069 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ start   │ -p newest-cni-921069 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ image   │ newest-cni-921069 image list --format=json                                                                                                                                                                                                          │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ pause   │ -p newest-cni-921069 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ unpause │ -p newest-cni-921069 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ delete  │ -p newest-cni-921069                                                                                                                                                                                                                                │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ delete  │ -p newest-cni-921069                                                                                                                                                                                                                                │ newest-cni-921069            │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │ 21 Nov 25 14:50 UTC │
	│ start   │ -p auto-650772 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-650772                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:50:35
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:50:35.017849 2870677 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:50:35.018076 2870677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:50:35.018091 2870677 out.go:374] Setting ErrFile to fd 2...
	I1121 14:50:35.018097 2870677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:50:35.018386 2870677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:50:35.018896 2870677 out.go:368] Setting JSON to false
	I1121 14:50:35.019967 2870677 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70383,"bootTime":1763666252,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 14:50:35.020045 2870677 start.go:143] virtualization:  
	I1121 14:50:35.023601 2870677 out.go:179] * [auto-650772] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:50:35.027724 2870677 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:50:35.027837 2870677 notify.go:221] Checking for updates...
	I1121 14:50:35.034110 2870677 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:50:35.037179 2870677 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:50:35.040221 2870677 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	I1121 14:50:35.043222 2870677 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:50:35.046269 2870677 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:50:35.049837 2870677 config.go:182] Loaded profile config "default-k8s-diff-port-219338": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:50:35.049959 2870677 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:50:35.095921 2870677 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:50:35.096052 2870677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:50:35.253822 2870677 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:50:35.243614474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:50:35.253932 2870677 docker.go:319] overlay module found
	I1121 14:50:35.257137 2870677 out.go:179] * Using the docker driver based on user configuration
	I1121 14:50:35.259985 2870677 start.go:309] selected driver: docker
	I1121 14:50:35.260013 2870677 start.go:930] validating driver "docker" against <nil>
	I1121 14:50:35.260029 2870677 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:50:35.260784 2870677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:50:35.370907 2870677 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-21 14:50:35.359825401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:50:35.371077 2870677 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:50:35.371331 2870677 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:50:35.374320 2870677 out.go:179] * Using Docker driver with root privileges
	I1121 14:50:35.377237 2870677 cni.go:84] Creating CNI manager for ""
	I1121 14:50:35.377340 2870677 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 14:50:35.377355 2870677 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 14:50:35.377474 2870677 start.go:353] cluster config:
	{Name:auto-650772 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-650772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:50:35.380705 2870677 out.go:179] * Starting "auto-650772" primary control-plane node in "auto-650772" cluster
	I1121 14:50:35.383519 2870677 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 14:50:35.386462 2870677 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:50:35.389423 2870677 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:50:35.389488 2870677 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1121 14:50:35.389502 2870677 cache.go:65] Caching tarball of preloaded images
	I1121 14:50:35.389656 2870677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:50:35.389909 2870677 preload.go:238] Found /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1121 14:50:35.389935 2870677 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1121 14:50:35.390904 2870677 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/config.json ...
	I1121 14:50:35.390943 2870677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/config.json: {Name:mk82699e344dc78b7be36099eaafc18000387f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:50:35.418949 2870677 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:50:35.418975 2870677 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:50:35.418995 2870677 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:50:35.419022 2870677 start.go:360] acquireMachinesLock for auto-650772: {Name:mk277c1ac3cd64b70ae58f78f0535c6ce70f5ac9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:50:35.419169 2870677 start.go:364] duration metric: took 118.635µs to acquireMachinesLock for "auto-650772"
	I1121 14:50:35.419211 2870677 start.go:93] Provisioning new machine with config: &{Name:auto-650772 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-650772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1121 14:50:35.419306 2870677 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:50:34.686282 2860542 pod_ready.go:83] waiting for pod "kube-proxy-s4wjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:50:35.085832 2860542 pod_ready.go:94] pod "kube-proxy-s4wjg" is "Ready"
	I1121 14:50:35.085866 2860542 pod_ready.go:86] duration metric: took 399.557154ms for pod "kube-proxy-s4wjg" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:50:35.302213 2860542 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-219338" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:50:35.685006 2860542 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-219338" is "Ready"
	I1121 14:50:35.685044 2860542 pod_ready.go:86] duration metric: took 382.803857ms for pod "kube-scheduler-default-k8s-diff-port-219338" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:50:35.685058 2860542 pod_ready.go:40] duration metric: took 1.605191747s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:50:35.779448 2860542 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 14:50:35.784433 2860542 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-219338" cluster and "default" namespace by default
	I1121 14:50:35.424635 2870677 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:50:35.424998 2870677 start.go:159] libmachine.API.Create for "auto-650772" (driver="docker")
	I1121 14:50:35.425123 2870677 client.go:173] LocalClient.Create starting
	I1121 14:50:35.425263 2870677 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem
	I1121 14:50:35.425320 2870677 main.go:143] libmachine: Decoding PEM data...
	I1121 14:50:35.425342 2870677 main.go:143] libmachine: Parsing certificate...
	I1121 14:50:35.425416 2870677 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem
	I1121 14:50:35.425456 2870677 main.go:143] libmachine: Decoding PEM data...
	I1121 14:50:35.425473 2870677 main.go:143] libmachine: Parsing certificate...
	I1121 14:50:35.425981 2870677 cli_runner.go:164] Run: docker network inspect auto-650772 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:50:35.450172 2870677 cli_runner.go:211] docker network inspect auto-650772 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:50:35.450253 2870677 network_create.go:284] running [docker network inspect auto-650772] to gather additional debugging logs...
	I1121 14:50:35.450271 2870677 cli_runner.go:164] Run: docker network inspect auto-650772
	W1121 14:50:35.470657 2870677 cli_runner.go:211] docker network inspect auto-650772 returned with exit code 1
	I1121 14:50:35.470690 2870677 network_create.go:287] error running [docker network inspect auto-650772]: docker network inspect auto-650772: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-650772 not found
	I1121 14:50:35.470705 2870677 network_create.go:289] output of [docker network inspect auto-650772]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-650772 not found
	
	** /stderr **
	I1121 14:50:35.470818 2870677 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:50:35.491598 2870677 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c13a3bee40ff IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:9f:8e:c6:2a:d6} reservation:<nil>}
	I1121 14:50:35.492096 2870677 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1859e8fd5584 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:c6:00:f6:5b:96} reservation:<nil>}
	I1121 14:50:35.492496 2870677 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-44a9b6062c4d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:b5:31:a5:3d:f0} reservation:<nil>}
	I1121 14:50:35.493201 2870677 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400198fc90}
	I1121 14:50:35.493227 2870677 network_create.go:124] attempt to create docker network auto-650772 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1121 14:50:35.493283 2870677 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-650772 auto-650772
	I1121 14:50:35.552274 2870677 network_create.go:108] docker network auto-650772 192.168.76.0/24 created
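	For anyone replaying this run, the subnet minikube settled on can be checked straight against the Docker daemon. A minimal sketch (network name and expected values taken from the log above):
	
	    # confirm the subnet/gateway of the freshly created cluster network
	    docker network inspect auto-650772 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	    # per the log: 192.168.76.0/24 192.168.76.1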
	I1121 14:50:35.552309 2870677 kic.go:121] calculated static IP "192.168.76.2" for the "auto-650772" container
	I1121 14:50:35.552397 2870677 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:50:35.569447 2870677 cli_runner.go:164] Run: docker volume create auto-650772 --label name.minikube.sigs.k8s.io=auto-650772 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:50:35.588722 2870677 oci.go:103] Successfully created a docker volume auto-650772
	I1121 14:50:35.588828 2870677 cli_runner.go:164] Run: docker run --rm --name auto-650772-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-650772 --entrypoint /usr/bin/test -v auto-650772:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:50:36.278111 2870677 oci.go:107] Successfully prepared a docker volume auto-650772
	I1121 14:50:36.278183 2870677 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 14:50:36.278192 2870677 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:50:36.278273 2870677 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-650772:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 14:50:40.742919 2870677 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-650772:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.46460709s)
	I1121 14:50:40.742973 2870677 kic.go:203] duration metric: took 4.464764936s to extract preloaded images to volume ...
	W1121 14:50:40.743108 2870677 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 14:50:40.743252 2870677 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:50:40.820636 2870677 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-650772 --name auto-650772 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-650772 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-650772 --network auto-650772 --ip 192.168.76.2 --volume auto-650772:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:50:41.150910 2870677 cli_runner.go:164] Run: docker container inspect auto-650772 --format={{.State.Running}}
	I1121 14:50:41.179079 2870677 cli_runner.go:164] Run: docker container inspect auto-650772 --format={{.State.Status}}
	I1121 14:50:41.211240 2870677 cli_runner.go:164] Run: docker exec auto-650772 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:50:41.277937 2870677 oci.go:144] the created container "auto-650772" has a running status.
	I1121 14:50:41.277964 2870677 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/auto-650772/id_rsa...
	I1121 14:50:41.560307 2870677 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/auto-650772/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:50:41.585373 2870677 cli_runner.go:164] Run: docker container inspect auto-650772 --format={{.State.Status}}
	I1121 14:50:41.603760 2870677 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:50:41.603778 2870677 kic_runner.go:114] Args: [docker exec --privileged auto-650772 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:50:41.671791 2870677 cli_runner.go:164] Run: docker container inspect auto-650772 --format={{.State.Status}}
	I1121 14:50:41.702109 2870677 machine.go:94] provisionDockerMachine start ...
	I1121 14:50:41.702217 2870677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-650772
	I1121 14:50:41.729863 2870677 main.go:143] libmachine: Using SSH client type: native
	I1121 14:50:41.730193 2870677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36766 <nil> <nil>}
	I1121 14:50:41.730202 2870677 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:50:41.730819 2870677 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40524->127.0.0.1:36766: read: connection reset by peer
	I1121 14:50:44.876628 2870677 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-650772
	
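	The "connection reset by peer" at 14:50:41 is the first dial racing sshd inside the just-started kic container; libmachine retries until the hostname probe answers three seconds later. A manual probe over the same forwarded port would look like the sketch below (port 36766 and the key path are from this log; the mapping only exists while the container runs):
	
	    ssh -o StrictHostKeyChecking=no -p 36766 \
	        -i /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/auto-650772/id_rsa \
	        docker@127.0.0.1 hostname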
	I1121 14:50:44.876654 2870677 ubuntu.go:182] provisioning hostname "auto-650772"
	I1121 14:50:44.876740 2870677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-650772
	I1121 14:50:44.894497 2870677 main.go:143] libmachine: Using SSH client type: native
	I1121 14:50:44.894811 2870677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36766 <nil> <nil>}
	I1121 14:50:44.894834 2870677 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-650772 && echo "auto-650772" | sudo tee /etc/hostname
	I1121 14:50:45.094307 2870677 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-650772
	
	I1121 14:50:45.094468 2870677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-650772
	I1121 14:50:45.121293 2870677 main.go:143] libmachine: Using SSH client type: native
	I1121 14:50:45.121639 2870677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 36766 <nil> <nil>}
	I1121 14:50:45.121663 2870677 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-650772' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-650772/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-650772' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:50:45.397393 2870677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:50:45.397420 2870677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-2633933/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-2633933/.minikube}
	I1121 14:50:45.397446 2870677 ubuntu.go:190] setting up certificates
	I1121 14:50:45.397458 2870677 provision.go:84] configureAuth start
	I1121 14:50:45.397531 2870677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-650772
	I1121 14:50:45.425413 2870677 provision.go:143] copyHostCerts
	I1121 14:50:45.425492 2870677 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem, removing ...
	I1121 14:50:45.425503 2870677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem
	I1121 14:50:45.425586 2870677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.pem (1082 bytes)
	I1121 14:50:45.425673 2870677 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem, removing ...
	I1121 14:50:45.425678 2870677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem
	I1121 14:50:45.425704 2870677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/cert.pem (1123 bytes)
	I1121 14:50:45.425756 2870677 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem, removing ...
	I1121 14:50:45.425760 2870677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem
	I1121 14:50:45.425783 2870677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-2633933/.minikube/key.pem (1679 bytes)
	I1121 14:50:45.425827 2870677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca-key.pem org=jenkins.auto-650772 san=[127.0.0.1 192.168.76.2 auto-650772 localhost minikube]
	I1121 14:50:45.750355 2870677 provision.go:177] copyRemoteCerts
	I1121 14:50:45.750514 2870677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:50:45.750578 2870677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-650772
	I1121 14:50:45.770651 2870677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36766 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/auto-650772/id_rsa Username:docker}
	I1121 14:50:45.890691 2870677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1121 14:50:45.913679 2870677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1121 14:50:45.959918 2870677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:50:45.988094 2870677 provision.go:87] duration metric: took 590.620632ms to configureAuth
	I1121 14:50:45.988127 2870677 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:50:45.988316 2870677 config.go:182] Loaded profile config "auto-650772": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:50:45.988330 2870677 machine.go:97] duration metric: took 4.286203452s to provisionDockerMachine
	I1121 14:50:45.988338 2870677 client.go:176] duration metric: took 10.563200064s to LocalClient.Create
	I1121 14:50:45.988350 2870677 start.go:167] duration metric: took 10.563355185s to libmachine.API.Create "auto-650772"
	I1121 14:50:45.988361 2870677 start.go:293] postStartSetup for "auto-650772" (driver="docker")
	I1121 14:50:45.988370 2870677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:50:45.988431 2870677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:50:45.988475 2870677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-650772
	I1121 14:50:46.015471 2870677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36766 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/auto-650772/id_rsa Username:docker}
	I1121 14:50:46.128885 2870677 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:50:46.133516 2870677 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:50:46.133549 2870677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:50:46.133561 2870677 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/addons for local assets ...
	I1121 14:50:46.133611 2870677 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-2633933/.minikube/files for local assets ...
	I1121 14:50:46.133699 2870677 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem -> 26357852.pem in /etc/ssl/certs
	I1121 14:50:46.133801 2870677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:50:46.144171 2870677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/ssl/certs/26357852.pem --> /etc/ssl/certs/26357852.pem (1708 bytes)
	I1121 14:50:46.173349 2870677 start.go:296] duration metric: took 184.973785ms for postStartSetup
	I1121 14:50:46.173745 2870677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-650772
	I1121 14:50:46.201274 2870677 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/config.json ...
	I1121 14:50:46.201560 2870677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:50:46.201613 2870677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-650772
	I1121 14:50:46.222781 2870677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36766 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/auto-650772/id_rsa Username:docker}
	I1121 14:50:46.322674 2870677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:50:46.329924 2870677 start.go:128] duration metric: took 10.910600976s to createHost
	I1121 14:50:46.329990 2870677 start.go:83] releasing machines lock for "auto-650772", held for 10.910798969s
	I1121 14:50:46.330097 2870677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-650772
	I1121 14:50:46.351784 2870677 ssh_runner.go:195] Run: cat /version.json
	I1121 14:50:46.351837 2870677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-650772
	I1121 14:50:46.351894 2870677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:50:46.351971 2870677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-650772
	I1121 14:50:46.373930 2870677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36766 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/auto-650772/id_rsa Username:docker}
	I1121 14:50:46.394077 2870677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36766 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/auto-650772/id_rsa Username:docker}
	I1121 14:50:46.586917 2870677 ssh_runner.go:195] Run: systemctl --version
	I1121 14:50:46.593712 2870677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:50:46.598936 2870677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:50:46.599005 2870677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:50:46.637767 2870677 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1121 14:50:46.637789 2870677 start.go:496] detecting cgroup driver to use...
	I1121 14:50:46.637823 2870677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 14:50:46.637873 2870677 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1121 14:50:46.658215 2870677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1121 14:50:46.673905 2870677 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:50:46.674018 2870677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:50:46.693429 2870677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:50:46.712364 2870677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:50:46.868019 2870677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:50:47.023699 2870677 docker.go:234] disabling docker service ...
	I1121 14:50:47.023776 2870677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:50:47.049017 2870677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:50:47.063940 2870677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:50:47.226643 2870677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:50:47.363980 2870677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:50:47.384133 2870677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:50:47.413108 2870677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1121 14:50:47.432911 2870677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1121 14:50:47.444108 2870677 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1121 14:50:47.444171 2870677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1121 14:50:47.455030 2870677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:50:47.464493 2870677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1121 14:50:47.476786 2870677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1121 14:50:47.486402 2870677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:50:47.494950 2870677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1121 14:50:47.504150 2870677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1121 14:50:47.515177 2870677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1121 14:50:47.530692 2870677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:50:47.540913 2870677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:50:47.548778 2870677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:50:47.696048 2870677 ssh_runner.go:195] Run: sudo systemctl restart containerd
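	The sed batch above boils down to a handful of containerd settings: cgroupfs instead of the systemd cgroup driver, the pinned pause image, and unprivileged ports enabled. A minimal spot-check from the CI host (container name and file path come from the log; the grep pattern is an assumption):
	
	    # verify the settings the sed edits applied before containerd was restarted
	    docker exec auto-650772 grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports|restrict_oom_score_adj' /etc/containerd/config.toml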
	I1121 14:50:47.871443 2870677 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1121 14:50:47.871505 2870677 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1121 14:50:47.876633 2870677 start.go:564] Will wait 60s for crictl version
	I1121 14:50:47.876707 2870677 ssh_runner.go:195] Run: which crictl
	I1121 14:50:47.893859 2870677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:50:47.930753 2870677 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1121 14:50:47.930823 2870677 ssh_runner.go:195] Run: containerd --version
	I1121 14:50:47.956605 2870677 ssh_runner.go:195] Run: containerd --version
	I1121 14:50:47.986543 2870677 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	dc94243895048       1611cd07b61d5       10 seconds ago       Running             busybox                   0                   0456a1d55983c       busybox                                                default
	34a9910078724       138784d87c9c5       16 seconds ago       Running             coredns                   0                   8bb7e0a112adc       coredns-66bc5c9577-6g67n                               kube-system
	8863e2663208f       ba04bb24b9575       16 seconds ago       Running             storage-provisioner       0                   118a4bd724072       storage-provisioner                                    kube-system
	d26cbda2f8de8       b1a8c6f707935       57 seconds ago       Running             kindnet-cni               0                   1c98bdbd34315       kindnet-l9ck4                                          kube-system
	af1bebac5b832       05baa95f5142d       57 seconds ago       Running             kube-proxy                0                   2c5eb800dfa85       kube-proxy-s4wjg                                       kube-system
	8e9e2caeab703       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   88c0a75d096b3       kube-scheduler-default-k8s-diff-port-219338            kube-system
	236365ed0b89e       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   b39a1fa9fdad1       kube-controller-manager-default-k8s-diff-port-219338   kube-system
	c90873fe60dcb       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   ede6fa7770337       kube-apiserver-default-k8s-diff-port-219338            kube-system
	ff190b41749d4       a1894772a478e       About a minute ago   Running             etcd                      0                   e83f2c49ef9b7       etcd-default-k8s-diff-port-219338                      kube-system
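	The listing above is the post-mortem's crictl snapshot of the default-k8s-diff-port-219338 node. While the profile is still up it can be reproduced with the sketch below (profile name from the log; crictl runs inside the node, hence minikube ssh):
	
	    minikube -p default-k8s-diff-port-219338 ssh -- sudo crictl ps -a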
	
	
	==> containerd <==
	Nov 21 14:50:32 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:32.850693327Z" level=info msg="connecting to shim 8863e2663208fed91ac25fb86ba5070660b82feb717ecda35bc3e85e7c7ebcfe" address="unix:///run/containerd/s/b46935ae24d9a753067cd8464235ba783f8942100ffd0ea46d5b9e9caedf9089" protocol=ttrpc version=3
	Nov 21 14:50:32 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:32.898898305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6g67n,Uid:66f36548-8cb3-4eee-b5e7-abfe4d6a0195,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bb7e0a112adc33cc4617b5937a7571cdca32a5195155f9ddd1070c72a5364d7\""
	Nov 21 14:50:32 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:32.931985474Z" level=info msg="CreateContainer within sandbox \"8bb7e0a112adc33cc4617b5937a7571cdca32a5195155f9ddd1070c72a5364d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 21 14:50:32 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:32.958598516Z" level=info msg="Container 34a9910078724e4286cb3b5efe631cb00e516b079c2f6e3bb2cd9ef354159e19: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:50:32 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:32.971868591Z" level=info msg="CreateContainer within sandbox \"8bb7e0a112adc33cc4617b5937a7571cdca32a5195155f9ddd1070c72a5364d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"34a9910078724e4286cb3b5efe631cb00e516b079c2f6e3bb2cd9ef354159e19\""
	Nov 21 14:50:32 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:32.972913057Z" level=info msg="StartContainer for \"34a9910078724e4286cb3b5efe631cb00e516b079c2f6e3bb2cd9ef354159e19\""
	Nov 21 14:50:32 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:32.973789233Z" level=info msg="connecting to shim 34a9910078724e4286cb3b5efe631cb00e516b079c2f6e3bb2cd9ef354159e19" address="unix:///run/containerd/s/cb36b0dccae93f62114dde92847d2cca5f828188d34e1c12f74a9bf7019b69cd" protocol=ttrpc version=3
	Nov 21 14:50:33 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:33.008751343Z" level=info msg="StartContainer for \"8863e2663208fed91ac25fb86ba5070660b82feb717ecda35bc3e85e7c7ebcfe\" returns successfully"
	Nov 21 14:50:33 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:33.136263073Z" level=info msg="StartContainer for \"34a9910078724e4286cb3b5efe631cb00e516b079c2f6e3bb2cd9ef354159e19\" returns successfully"
	Nov 21 14:50:36 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:36.442977313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:a18b0d57-5cfa-4219-961a-f30cbe26f965,Namespace:default,Attempt:0,}"
	Nov 21 14:50:36 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:36.532227450Z" level=info msg="connecting to shim 0456a1d55983c73b15cc214eba92e5445c10dab7ef36fb11ac58e9e1db320e1a" address="unix:///run/containerd/s/4494cacb352d21ad48189cd449b3edac4190b183c5475793a0455bb8e49371fb" namespace=k8s.io protocol=ttrpc version=3
	Nov 21 14:50:36 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:36.605118297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:a18b0d57-5cfa-4219-961a-f30cbe26f965,Namespace:default,Attempt:0,} returns sandbox id \"0456a1d55983c73b15cc214eba92e5445c10dab7ef36fb11ac58e9e1db320e1a\""
	Nov 21 14:50:36 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:36.607751115Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.606964494Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.609715004Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.612406635Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.616703332Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.617270659Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.006612214s"
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.617896265Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.629789486Z" level=info msg="CreateContainer within sandbox \"0456a1d55983c73b15cc214eba92e5445c10dab7ef36fb11ac58e9e1db320e1a\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.644149244Z" level=info msg="Container dc94243895048aab8eac12746934f101128987a00c4b97bca12e5f8ec34f3421: CDI devices from CRI Config.CDIDevices: []"
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.654545473Z" level=info msg="CreateContainer within sandbox \"0456a1d55983c73b15cc214eba92e5445c10dab7ef36fb11ac58e9e1db320e1a\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"dc94243895048aab8eac12746934f101128987a00c4b97bca12e5f8ec34f3421\""
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.655278490Z" level=info msg="StartContainer for \"dc94243895048aab8eac12746934f101128987a00c4b97bca12e5f8ec34f3421\""
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.656483699Z" level=info msg="connecting to shim dc94243895048aab8eac12746934f101128987a00c4b97bca12e5f8ec34f3421" address="unix:///run/containerd/s/4494cacb352d21ad48189cd449b3edac4190b183c5475793a0455bb8e49371fb" protocol=ttrpc version=3
	Nov 21 14:50:38 default-k8s-diff-port-219338 containerd[756]: time="2025-11-21T14:50:38.760507941Z" level=info msg="StartContainer for \"dc94243895048aab8eac12746934f101128987a00c4b97bca12e5f8ec34f3421\" returns successfully"
	
	
	==> coredns [34a9910078724e4286cb3b5efe631cb00e516b079c2f6e3bb2cd9ef354159e19] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50270 - 23985 "HINFO IN 1567346524217854883.1598740298227831180. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033628282s
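	Only CoreDNS's startup lines made it into the snapshot above. The full log is available through kubectl while the cluster exists; a sketch, assuming the kubeconfig context carries the profile name as the "Done!" line earlier suggests:
	
	    kubectl --context default-k8s-diff-port-219338 -n kube-system logs -l k8s-app=kube-dns --tail=100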
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-219338
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-219338
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=default-k8s-diff-port-219338
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_49_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:49:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-219338
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:50:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:50:47 +0000   Fri, 21 Nov 2025 14:49:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:50:47 +0000   Fri, 21 Nov 2025 14:49:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:50:47 +0000   Fri, 21 Nov 2025 14:49:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:50:47 +0000   Fri, 21 Nov 2025 14:50:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-219338
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                b7fd1707-d8d3-495c-9e74-3485b60b03c0
	  Boot ID:                    41b0e09d-5a9a-49c9-8980-dca608ba3fce
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-6g67n                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     59s
	  kube-system                 etcd-default-k8s-diff-port-219338                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         64s
	  kube-system                 kindnet-l9ck4                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      59s
	  kube-system                 kube-apiserver-default-k8s-diff-port-219338             250m (12%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-219338    200m (10%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-proxy-s4wjg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-default-k8s-diff-port-219338             100m (5%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 57s                kube-proxy       
	  Warning  CgroupV1                 74s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node default-k8s-diff-port-219338 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node default-k8s-diff-port-219338 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node default-k8s-diff-port-219338 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  64s                kubelet          Node default-k8s-diff-port-219338 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s                kubelet          Node default-k8s-diff-port-219338 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s                kubelet          Node default-k8s-diff-port-219338 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           60s                node-controller  Node default-k8s-diff-port-219338 event: Registered Node default-k8s-diff-port-219338 in Controller
	  Normal   NodeReady                17s                kubelet          Node default-k8s-diff-port-219338 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 13:02] overlayfs: idmapped layers are currently not supported
	[Nov21 13:03] overlayfs: idmapped layers are currently not supported
	[Nov21 13:06] overlayfs: idmapped layers are currently not supported
	[Nov21 13:08] overlayfs: idmapped layers are currently not supported
	[Nov21 13:09] overlayfs: idmapped layers are currently not supported
	[Nov21 13:10] overlayfs: idmapped layers are currently not supported
	[ +19.808801] overlayfs: idmapped layers are currently not supported
	[Nov21 13:11] overlayfs: idmapped layers are currently not supported
	[Nov21 13:12] overlayfs: idmapped layers are currently not supported
	[Nov21 13:13] overlayfs: idmapped layers are currently not supported
	[Nov21 13:14] overlayfs: idmapped layers are currently not supported
	[Nov21 13:15] overlayfs: idmapped layers are currently not supported
	[ +16.772572] overlayfs: idmapped layers are currently not supported
	[Nov21 13:16] overlayfs: idmapped layers are currently not supported
	[Nov21 13:17] overlayfs: idmapped layers are currently not supported
	[ +27.396777] overlayfs: idmapped layers are currently not supported
	[Nov21 13:18] overlayfs: idmapped layers are currently not supported
	[ +25.430119] overlayfs: idmapped layers are currently not supported
	[Nov21 13:19] overlayfs: idmapped layers are currently not supported
	[Nov21 13:20] overlayfs: idmapped layers are currently not supported
	[Nov21 13:21] overlayfs: idmapped layers are currently not supported
	[Nov21 13:22] overlayfs: idmapped layers are currently not supported
	[Nov21 13:23] overlayfs: idmapped layers are currently not supported
	[Nov21 13:24] overlayfs: idmapped layers are currently not supported
	[Nov21 13:55] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [ff190b41749d4b1c775eb7e2155bf86c6a1cabc60eaad113d5260a138ec7daae] <==
	{"level":"warn","ts":"2025-11-21T14:49:40.359990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.373514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.392528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.418756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.434245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.449724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.474886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.492940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.510857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.546010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.565414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.590677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.616225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.637342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.668820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.689490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.709794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.746463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.775509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.816296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.845673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.866401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.902108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:40.922029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:49:41.027994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60858","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:50:49 up 19:33,  0 user,  load average: 3.36, 3.63, 3.12
	Linux default-k8s-diff-port-219338 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d26cbda2f8de8f2cb0058c7438fbcd477c081f160b63c206ee4a2c103f58aade] <==
	I1121 14:49:51.930652       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:49:51.949125       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:49:51.949288       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:49:51.949306       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:49:51.949322       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:49:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:49:52.233307       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:49:52.233340       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:49:52.233351       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:49:52.233730       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 14:50:22.229524       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 14:50:22.230817       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 14:50:22.234291       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 14:50:22.234414       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1121 14:50:23.733436       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:50:23.733473       1 metrics.go:72] Registering metrics
	I1121 14:50:23.733524       1 controller.go:711] "Syncing nftables rules"
	I1121 14:50:32.235267       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:50:32.235319       1 main.go:301] handling current node
	I1121 14:50:42.229177       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:50:42.229331       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c90873fe60dcb064026fce4fb3a55bb07a32b04e36cec547b82f4c4421d0a85a] <==
	I1121 14:49:42.260141       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 14:49:42.278234       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:49:42.285137       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:49:42.316304       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:49:42.316918       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 14:49:42.378321       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:49:42.386592       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:49:42.839899       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:49:42.860378       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:49:42.860404       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:49:43.975681       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:49:44.064705       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:49:44.148146       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:49:44.207288       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:49:44.228552       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1121 14:49:44.230019       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:49:44.238137       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:49:45.158757       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:49:45.180057       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:49:45.209198       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:49:49.225934       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:49:49.255116       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:49:50.039110       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:49:50.129991       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1121 14:50:45.442043       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:55322: use of closed network connection
	
	
	==> kube-controller-manager [236365ed0b89eac33c6dcbdf426b5f9ce5ac85899aba3be307c923d96de4f8d6] <==
	I1121 14:49:49.207848       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 14:49:49.207887       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 14:49:49.207920       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 14:49:49.207925       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 14:49:49.207929       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 14:49:49.219806       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:49:49.229391       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-219338" podCIDRs=["10.244.0.0/24"]
	I1121 14:49:49.241612       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:49:49.241819       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 14:49:49.241961       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:49:49.241989       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:49:49.242382       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 14:49:49.242418       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:49:49.244519       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:49:49.253155       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:49:49.253301       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:49:49.253645       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:49:49.253707       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:49:49.253769       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:49:49.254259       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:49:49.257864       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 14:49:49.261082       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:49:49.269533       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:49:49.270874       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 14:50:34.205748       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [af1bebac5b832d228c4a45b9f1a0a1ba0aef88ec756c7611d430e1ed7b9856ec] <==
	I1121 14:49:51.942190       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:49:52.083028       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:49:52.191890       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:49:52.191928       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:49:52.192006       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:49:52.361385       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:49:52.361438       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:49:52.365762       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:49:52.366083       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:49:52.366119       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:49:52.368037       1 config.go:200] "Starting service config controller"
	I1121 14:49:52.368054       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:49:52.368070       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:49:52.368074       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:49:52.368087       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:49:52.368091       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:49:52.371814       1 config.go:309] "Starting node config controller"
	I1121 14:49:52.371830       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:49:52.371838       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:49:52.468643       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:49:52.468691       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:49:52.468730       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8e9e2caeab703d91901a33693a056e2c031273a1789496fdfee9409afb25376a] <==
	E1121 14:49:42.340560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:49:42.340614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:49:42.348403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:49:42.348705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:49:42.348854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:49:42.348635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:49:42.349137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:49:42.349268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:49:42.349384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:49:42.349430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:49:42.351908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:49:43.138495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:49:43.197812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:49:43.223692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:49:43.277384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:49:43.300616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:49:43.327519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:49:43.333576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:49:43.353839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1121 14:49:43.362189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:49:43.444091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:49:43.501282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:49:43.560180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:49:43.582367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1121 14:49:45.933540       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:49:46 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:46.566221    1478 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-default-k8s-diff-port-219338"
	Nov 21 14:49:46 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:46.566402    1478 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-default-k8s-diff-port-219338"
	Nov 21 14:49:46 default-k8s-diff-port-219338 kubelet[1478]: E1121 14:49:46.617314    1478 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-219338\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-219338"
	Nov 21 14:49:46 default-k8s-diff-port-219338 kubelet[1478]: E1121 14:49:46.631410    1478 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-219338\" already exists" pod="kube-system/etcd-default-k8s-diff-port-219338"
	Nov 21 14:49:49 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:49.251994    1478 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:49:49 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:49.258203    1478 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.366847    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5b6dee86-9d86-4845-bdde-7a7b7e9ca99d-cni-cfg\") pod \"kindnet-l9ck4\" (UID: \"5b6dee86-9d86-4845-bdde-7a7b7e9ca99d\") " pod="kube-system/kindnet-l9ck4"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.366913    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b6dee86-9d86-4845-bdde-7a7b7e9ca99d-xtables-lock\") pod \"kindnet-l9ck4\" (UID: \"5b6dee86-9d86-4845-bdde-7a7b7e9ca99d\") " pod="kube-system/kindnet-l9ck4"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.366953    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b6dee86-9d86-4845-bdde-7a7b7e9ca99d-lib-modules\") pod \"kindnet-l9ck4\" (UID: \"5b6dee86-9d86-4845-bdde-7a7b7e9ca99d\") " pod="kube-system/kindnet-l9ck4"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.366973    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d73e057f-eb19-4beb-bbba-590064218611-xtables-lock\") pod \"kube-proxy-s4wjg\" (UID: \"d73e057f-eb19-4beb-bbba-590064218611\") " pod="kube-system/kube-proxy-s4wjg"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.366992    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75fd6\" (UniqueName: \"kubernetes.io/projected/d73e057f-eb19-4beb-bbba-590064218611-kube-api-access-75fd6\") pod \"kube-proxy-s4wjg\" (UID: \"d73e057f-eb19-4beb-bbba-590064218611\") " pod="kube-system/kube-proxy-s4wjg"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.367020    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d73e057f-eb19-4beb-bbba-590064218611-kube-proxy\") pod \"kube-proxy-s4wjg\" (UID: \"d73e057f-eb19-4beb-bbba-590064218611\") " pod="kube-system/kube-proxy-s4wjg"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.367041    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d73e057f-eb19-4beb-bbba-590064218611-lib-modules\") pod \"kube-proxy-s4wjg\" (UID: \"d73e057f-eb19-4beb-bbba-590064218611\") " pod="kube-system/kube-proxy-s4wjg"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.367060    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn9qr\" (UniqueName: \"kubernetes.io/projected/5b6dee86-9d86-4845-bdde-7a7b7e9ca99d-kube-api-access-vn9qr\") pod \"kindnet-l9ck4\" (UID: \"5b6dee86-9d86-4845-bdde-7a7b7e9ca99d\") " pod="kube-system/kindnet-l9ck4"
	Nov 21 14:49:50 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:50.570506    1478 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 21 14:49:52 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:52.659960    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-l9ck4" podStartSLOduration=2.659940744 podStartE2EDuration="2.659940744s" podCreationTimestamp="2025-11-21 14:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:49:52.637768115 +0000 UTC m=+7.585059792" watchObservedRunningTime="2025-11-21 14:49:52.659940744 +0000 UTC m=+7.607232397"
	Nov 21 14:49:53 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:49:53.801749    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s4wjg" podStartSLOduration=3.801671756 podStartE2EDuration="3.801671756s" podCreationTimestamp="2025-11-21 14:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:49:52.68807686 +0000 UTC m=+7.635368513" watchObservedRunningTime="2025-11-21 14:49:53.801671756 +0000 UTC m=+8.748963409"
	Nov 21 14:50:32 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:50:32.250096    1478 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:50:32 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:50:32.461782    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dc9557e1-c6ed-40d6-b415-f6273f55c51b-tmp\") pod \"storage-provisioner\" (UID: \"dc9557e1-c6ed-40d6-b415-f6273f55c51b\") " pod="kube-system/storage-provisioner"
	Nov 21 14:50:32 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:50:32.461884    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvbb7\" (UniqueName: \"kubernetes.io/projected/dc9557e1-c6ed-40d6-b415-f6273f55c51b-kube-api-access-vvbb7\") pod \"storage-provisioner\" (UID: \"dc9557e1-c6ed-40d6-b415-f6273f55c51b\") " pod="kube-system/storage-provisioner"
	Nov 21 14:50:32 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:50:32.461928    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66f36548-8cb3-4eee-b5e7-abfe4d6a0195-config-volume\") pod \"coredns-66bc5c9577-6g67n\" (UID: \"66f36548-8cb3-4eee-b5e7-abfe4d6a0195\") " pod="kube-system/coredns-66bc5c9577-6g67n"
	Nov 21 14:50:32 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:50:32.461966    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96vm9\" (UniqueName: \"kubernetes.io/projected/66f36548-8cb3-4eee-b5e7-abfe4d6a0195-kube-api-access-96vm9\") pod \"coredns-66bc5c9577-6g67n\" (UID: \"66f36548-8cb3-4eee-b5e7-abfe4d6a0195\") " pod="kube-system/coredns-66bc5c9577-6g67n"
	Nov 21 14:50:33 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:50:33.740602    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6g67n" podStartSLOduration=43.740584105 podStartE2EDuration="43.740584105s" podCreationTimestamp="2025-11-21 14:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:50:33.740580889 +0000 UTC m=+48.687872542" watchObservedRunningTime="2025-11-21 14:50:33.740584105 +0000 UTC m=+48.687875758"
	Nov 21 14:50:36 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:50:36.129098    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=44.129079089 podStartE2EDuration="44.129079089s" podCreationTimestamp="2025-11-21 14:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:50:33.80377567 +0000 UTC m=+48.751067323" watchObservedRunningTime="2025-11-21 14:50:36.129079089 +0000 UTC m=+51.076370742"
	Nov 21 14:50:36 default-k8s-diff-port-219338 kubelet[1478]: I1121 14:50:36.206582    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thvsz\" (UniqueName: \"kubernetes.io/projected/a18b0d57-5cfa-4219-961a-f30cbe26f965-kube-api-access-thvsz\") pod \"busybox\" (UID: \"a18b0d57-5cfa-4219-961a-f30cbe26f965\") " pod="default/busybox"
	
	
	==> storage-provisioner [8863e2663208fed91ac25fb86ba5070660b82feb717ecda35bc3e85e7c7ebcfe] <==
	I1121 14:50:33.050777       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:50:33.054951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:33.072546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:50:33.073391       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:50:33.077000       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-219338_d3d78954-0b96-40b0-956b-b1b8a98482be!
	I1121 14:50:33.085393       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2e79590-cffa-43b0-82e0-17087e2e2242", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-219338_d3d78954-0b96-40b0-956b-b1b8a98482be became leader
	W1121 14:50:33.087330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:33.112554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:50:33.183438       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-219338_d3d78954-0b96-40b0-956b-b1b8a98482be!
	W1121 14:50:35.116221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:35.280811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:37.285418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:37.294598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:39.298443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:39.320971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:41.324737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:41.335625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:43.339547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:43.344231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:45.362330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:45.375108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:47.394314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:47.414656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:49.417935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:50:49.450428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-219338 -n default-k8s-diff-port-219338
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-219338 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.97s)


Test pass (299/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 23.5
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.2
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 5.71
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 163.99
29 TestAddons/serial/Volcano 40.71
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 10.85
35 TestAddons/parallel/Registry 16.75
36 TestAddons/parallel/RegistryCreds 0.8
37 TestAddons/parallel/Ingress 19.93
38 TestAddons/parallel/InspektorGadget 11.82
39 TestAddons/parallel/MetricsServer 6.02
41 TestAddons/parallel/CSI 45.52
42 TestAddons/parallel/Headlamp 17.9
43 TestAddons/parallel/CloudSpanner 5.67
44 TestAddons/parallel/LocalPath 10.93
45 TestAddons/parallel/NvidiaDevicePlugin 7.03
46 TestAddons/parallel/Yakd 10.86
48 TestAddons/StoppedEnableDisable 12.37
49 TestCertOptions 34.85
50 TestCertExpiration 230.89
52 TestForceSystemdFlag 45.3
53 TestForceSystemdEnv 48.53
54 TestDockerEnvContainerd 49.1
58 TestErrorSpam/setup 33.25
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.14
61 TestErrorSpam/pause 1.71
62 TestErrorSpam/unpause 1.82
63 TestErrorSpam/stop 1.6
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 77.97
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 7.55
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.5
75 TestFunctional/serial/CacheCmd/cache/add_local 1.23
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.86
80 TestFunctional/serial/CacheCmd/cache/delete 0.14
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 41.95
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.49
86 TestFunctional/serial/LogsFileCmd 1.84
87 TestFunctional/serial/InvalidService 4.32
89 TestFunctional/parallel/ConfigCmd 0.49
90 TestFunctional/parallel/DashboardCmd 7.89
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.26
93 TestFunctional/parallel/StatusCmd 1.32
97 TestFunctional/parallel/ServiceCmdConnect 9.61
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 26.9
101 TestFunctional/parallel/SSHCmd 0.72
102 TestFunctional/parallel/CpCmd 2.52
104 TestFunctional/parallel/FileSync 0.41
105 TestFunctional/parallel/CertSync 2.25
109 TestFunctional/parallel/NodeLabels 0.32
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.68
113 TestFunctional/parallel/License 0.42
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.73
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.4
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
126 TestFunctional/parallel/ServiceCmd/List 0.52
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
129 TestFunctional/parallel/ServiceCmd/Format 0.45
130 TestFunctional/parallel/ServiceCmd/URL 0.49
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.62
132 TestFunctional/parallel/MountCmd/any-port 8.62
133 TestFunctional/parallel/ProfileCmd/profile_list 0.61
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
135 TestFunctional/parallel/MountCmd/specific-port 2.41
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.1
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.35
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.91
144 TestFunctional/parallel/ImageCommands/Setup 0.67
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.25
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.42
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.29
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.5
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.48
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 207.94
163 TestMultiControlPlane/serial/DeployApp 7.83
164 TestMultiControlPlane/serial/PingHostFromPods 1.7
165 TestMultiControlPlane/serial/AddWorkerNode 60.38
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.07
168 TestMultiControlPlane/serial/CopyFile 20.58
169 TestMultiControlPlane/serial/StopSecondaryNode 13.01
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.83
171 TestMultiControlPlane/serial/RestartSecondaryNode 15.52
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.12
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 96.8
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.81
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.82
176 TestMultiControlPlane/serial/StopCluster 36.41
177 TestMultiControlPlane/serial/RestartCluster 61.56
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
179 TestMultiControlPlane/serial/AddSecondaryNode 63.93
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.2
185 TestJSONOutput/start/Command 81.42
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.71
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.62
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.02
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 76.38
211 TestKicCustomNetwork/use_default_bridge_network 41.71
212 TestKicExistingNetwork 38.51
213 TestKicCustomSubnet 36.44
214 TestKicStaticIP 37.37
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 75.33
219 TestMountStart/serial/StartWithMountFirst 8.22
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 9.76
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.72
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.91
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 76.94
231 TestMultiNode/serial/DeployApp2Nodes 5.12
232 TestMultiNode/serial/PingHostFrom2Pods 1
233 TestMultiNode/serial/AddNode 58.3
234 TestMultiNode/serial/MultiNodeLabels 0.08
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.35
237 TestMultiNode/serial/StopNode 2.39
238 TestMultiNode/serial/StartAfterStop 8.13
239 TestMultiNode/serial/RestartKeepsNodes 81.74
240 TestMultiNode/serial/DeleteNode 5.76
241 TestMultiNode/serial/StopMultiNode 24.08
242 TestMultiNode/serial/RestartMultiNode 51.96
243 TestMultiNode/serial/ValidateNameConflict 38.83
248 TestPreload 132.89
250 TestScheduledStopUnix 109.21
253 TestInsufficientStorage 13.4
254 TestRunningBinaryUpgrade 67.25
256 TestKubernetesUpgrade 351.27
257 TestMissingContainerUpgrade 150.85
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 47.07
261 TestNoKubernetes/serial/StartWithStopK8s 24.25
262 TestNoKubernetes/serial/Start 8.37
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
265 TestNoKubernetes/serial/ProfileList 0.7
266 TestNoKubernetes/serial/Stop 1.29
267 TestNoKubernetes/serial/StartNoArgs 7.83
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.48
269 TestStoppedBinaryUpgrade/Setup 8.48
270 TestStoppedBinaryUpgrade/Upgrade 58.83
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.67
280 TestPause/serial/Start 84.89
281 TestPause/serial/SecondStartNoReconfiguration 7.19
282 TestPause/serial/Pause 0.74
283 TestPause/serial/VerifyStatus 0.44
284 TestPause/serial/Unpause 0.81
285 TestPause/serial/PauseAgain 0.78
286 TestPause/serial/DeletePaused 2.53
287 TestPause/serial/VerifyDeletedResources 0.44
295 TestNetworkPlugins/group/false 5.5
300 TestStartStop/group/old-k8s-version/serial/FirstStart 60.52
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.18
303 TestStartStop/group/old-k8s-version/serial/Stop 12.12
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
305 TestStartStop/group/old-k8s-version/serial/SecondStart 51.61
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.16
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
309 TestStartStop/group/old-k8s-version/serial/Pause 5.05
311 TestStartStop/group/no-preload/serial/FirstStart 71.23
313 TestStartStop/group/embed-certs/serial/FirstStart 92.01
315 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
316 TestStartStop/group/no-preload/serial/Stop 12.17
318 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
319 TestStartStop/group/no-preload/serial/SecondStart 62.62
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.46
321 TestStartStop/group/embed-certs/serial/Stop 12.95
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.31
323 TestStartStop/group/embed-certs/serial/SecondStart 51.54
324 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
326 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
327 TestStartStop/group/no-preload/serial/Pause 3.2
329 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.37
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
333 TestStartStop/group/embed-certs/serial/Pause 4.09
335 TestStartStop/group/newest-cni/serial/FirstStart 40.14
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.26
338 TestStartStop/group/newest-cni/serial/Stop 1.36
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
340 TestStartStop/group/newest-cni/serial/SecondStart 17.67
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
344 TestStartStop/group/newest-cni/serial/Pause 3.01
345 TestNetworkPlugins/group/auto/Start 84.99
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.47
348 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.43
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
350 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.91
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
352 TestNetworkPlugins/group/auto/KubeletFlags 0.7
353 TestNetworkPlugins/group/auto/NetCatPod 10.3
354 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
355 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
356 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.42
357 TestNetworkPlugins/group/auto/DNS 0.25
358 TestNetworkPlugins/group/auto/Localhost 0.22
359 TestNetworkPlugins/group/auto/HairPin 0.18
360 TestNetworkPlugins/group/kindnet/Start 82.98
361 TestNetworkPlugins/group/calico/Start 63.59
362 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
365 TestNetworkPlugins/group/kindnet/NetCatPod 11.35
366 TestNetworkPlugins/group/calico/KubeletFlags 0.39
367 TestNetworkPlugins/group/calico/NetCatPod 10.41
368 TestNetworkPlugins/group/kindnet/DNS 0.23
369 TestNetworkPlugins/group/kindnet/Localhost 0.15
370 TestNetworkPlugins/group/kindnet/HairPin 0.16
371 TestNetworkPlugins/group/calico/DNS 0.17
372 TestNetworkPlugins/group/calico/Localhost 0.15
373 TestNetworkPlugins/group/calico/HairPin 0.17
374 TestNetworkPlugins/group/custom-flannel/Start 64.07
375 TestNetworkPlugins/group/enable-default-cni/Start 53.9
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.28
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.28
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
383 TestNetworkPlugins/group/custom-flannel/DNS 0.24
384 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
385 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
386 TestNetworkPlugins/group/flannel/Start 63.88
387 TestNetworkPlugins/group/bridge/Start 80.73
388 TestNetworkPlugins/group/flannel/ControllerPod 6.01
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
390 TestNetworkPlugins/group/flannel/NetCatPod 10.3
391 TestNetworkPlugins/group/flannel/DNS 0.17
392 TestNetworkPlugins/group/flannel/Localhost 0.15
393 TestNetworkPlugins/group/flannel/HairPin 0.16
394 TestNetworkPlugins/group/bridge/KubeletFlags 0.46
395 TestNetworkPlugins/group/bridge/NetCatPod 9.36
396 TestNetworkPlugins/group/bridge/DNS 0.26
397 TestNetworkPlugins/group/bridge/Localhost 0.19
398 TestNetworkPlugins/group/bridge/HairPin 0.25
x
+
TestDownloadOnly/v1.28.0/json-events (23.5s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-307941 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-307941 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (23.503542495s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (23.50s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1121 13:56:38.276271 2635785 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1121 13:56:38.276345 2635785 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
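
The preload-exists subtest reduces to a file-existence check against the profile cache. A minimal sketch of that idea in Go, assuming the cache layout visible in the log above (the path template and helper name are illustrative, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadExists reports whether a preloaded-images tarball for the given
	// Kubernetes version and runtime is already in the local cache.
	// Path layout copied from the log lines above; treat it as an assumption.
	func preloadExists(minikubeHome, k8sVersion, runtime string) bool {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-arm64.tar.lz4", k8sVersion, runtime)
		info, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
		return err == nil && !info.IsDir()
	}

	func main() {
		fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.28.0", "containerd"))
	}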

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-307941
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-307941: exit status 85 (70.926189ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-307941 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-307941 │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 13:56:14
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 13:56:14.817261 2635791 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:56:14.817527 2635791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:56:14.817540 2635791 out.go:374] Setting ErrFile to fd 2...
	I1121 13:56:14.817553 2635791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:56:14.818185 2635791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	W1121 13:56:14.818399 2635791 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21847-2633933/.minikube/config/config.json: open /home/jenkins/minikube-integration/21847-2633933/.minikube/config/config.json: no such file or directory
	I1121 13:56:14.818904 2635791 out.go:368] Setting JSON to true
	I1121 13:56:14.819785 2635791 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":67123,"bootTime":1763666252,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 13:56:14.819881 2635791 start.go:143] virtualization:  
	I1121 13:56:14.823978 2635791 out.go:99] [download-only-307941] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1121 13:56:14.824212 2635791 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball: no such file or directory
	I1121 13:56:14.824271 2635791 notify.go:221] Checking for updates...
	I1121 13:56:14.827059 2635791 out.go:171] MINIKUBE_LOCATION=21847
	I1121 13:56:14.830153 2635791 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 13:56:14.833353 2635791 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 13:56:14.836230 2635791 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	I1121 13:56:14.839099 2635791 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1121 13:56:14.844586 2635791 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 13:56:14.844850 2635791 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 13:56:14.875911 2635791 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 13:56:14.876086 2635791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:56:14.935199 2635791 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-21 13:56:14.925654763 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 13:56:14.935303 2635791 docker.go:319] overlay module found
	I1121 13:56:14.938302 2635791 out.go:99] Using the docker driver based on user configuration
	I1121 13:56:14.938339 2635791 start.go:309] selected driver: docker
	I1121 13:56:14.938346 2635791 start.go:930] validating driver "docker" against <nil>
	I1121 13:56:14.938461 2635791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:56:14.994260 2635791 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-21 13:56:14.985623036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 13:56:14.994422 2635791 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 13:56:14.994694 2635791 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1121 13:56:14.994852 2635791 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 13:56:14.998067 2635791 out.go:171] Using Docker driver with root privileges
	I1121 13:56:15.012795 2635791 cni.go:84] Creating CNI manager for ""
	I1121 13:56:15.012894 2635791 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 13:56:15.012905 2635791 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 13:56:15.013005 2635791 start.go:353] cluster config:
	{Name:download-only-307941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-307941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 13:56:15.016389 2635791 out.go:99] Starting "download-only-307941" primary control-plane node in "download-only-307941" cluster
	I1121 13:56:15.016441 2635791 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 13:56:15.019647 2635791 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1121 13:56:15.019728 2635791 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 13:56:15.019921 2635791 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 13:56:15.037423 2635791 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1121 13:56:15.037635 2635791 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1121 13:56:15.037737 2635791 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1121 13:56:15.073105 2635791 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1121 13:56:15.073134 2635791 cache.go:65] Caching tarball of preloaded images
	I1121 13:56:15.073298 2635791 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 13:56:15.076613 2635791 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1121 13:56:15.076655 2635791 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1121 13:56:15.167518 2635791 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1121 13:56:15.167650 2635791 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1121 13:56:20.975158 2635791 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	
	
	* The control-plane node download-only-307941 host does not exist
	  To start a cluster, run: "minikube start -p download-only-307941"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
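
LogsDuration passes despite the non-zero exit: on a download-only profile there is no running cluster, so the test expects `minikube logs` to fail with exit status 85. A hedged sketch of asserting a specific exit code with os/exec (binary and profile names copied from the log; the real check in aaa_download_only_test.go may differ):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the command and capture both streams, as the test harness does.
		out, err := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-307941").CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 85 {
			fmt.Println("got the expected exit status 85")
			return
		}
		fmt.Printf("unexpected result: err=%v\n%s", err, out)
	}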

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-307941
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (5.71s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-060041 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-060041 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.705983074s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.71s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1121 13:56:44.388387 2635785 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1121 13:56:44.388426 2635785 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-060041
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-060041: exit status 85 (67.315882ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-307941 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-307941 │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:56 UTC │
	│ delete  │ -p download-only-307941                                                                                                                                                               │ download-only-307941 │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │ 21 Nov 25 13:56 UTC │
	│ start   │ -o=json --download-only -p download-only-060041 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-060041 │ jenkins │ v1.37.0 │ 21 Nov 25 13:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 13:56:38
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 13:56:38.723021 2635984 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:56:38.723138 2635984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:56:38.723150 2635984 out.go:374] Setting ErrFile to fd 2...
	I1121 13:56:38.723155 2635984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:56:38.723394 2635984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 13:56:38.723784 2635984 out.go:368] Setting JSON to true
	I1121 13:56:38.724607 2635984 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":67147,"bootTime":1763666252,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 13:56:38.724677 2635984 start.go:143] virtualization:  
	I1121 13:56:38.726544 2635984 out.go:99] [download-only-060041] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 13:56:38.726791 2635984 notify.go:221] Checking for updates...
	I1121 13:56:38.728478 2635984 out.go:171] MINIKUBE_LOCATION=21847
	I1121 13:56:38.729836 2635984 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 13:56:38.731537 2635984 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 13:56:38.732951 2635984 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	I1121 13:56:38.734418 2635984 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1121 13:56:38.736921 2635984 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 13:56:38.737205 2635984 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 13:56:38.757937 2635984 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 13:56:38.758057 2635984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:56:38.825149 2635984 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-21 13:56:38.815010049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 13:56:38.825267 2635984 docker.go:319] overlay module found
	I1121 13:56:38.826762 2635984 out.go:99] Using the docker driver based on user configuration
	I1121 13:56:38.826802 2635984 start.go:309] selected driver: docker
	I1121 13:56:38.826809 2635984 start.go:930] validating driver "docker" against <nil>
	I1121 13:56:38.826908 2635984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:56:38.882601 2635984 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-21 13:56:38.873105604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 13:56:38.882760 2635984 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 13:56:38.883052 2635984 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1121 13:56:38.883211 2635984 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 13:56:38.884950 2635984 out.go:171] Using Docker driver with root privileges
	I1121 13:56:38.886231 2635984 cni.go:84] Creating CNI manager for ""
	I1121 13:56:38.886304 2635984 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 13:56:38.886326 2635984 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 13:56:38.886409 2635984 start.go:353] cluster config:
	{Name:download-only-060041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-060041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 13:56:38.887972 2635984 out.go:99] Starting "download-only-060041" primary control-plane node in "download-only-060041" cluster
	I1121 13:56:38.887997 2635984 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 13:56:38.889437 2635984 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1121 13:56:38.889493 2635984 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 13:56:38.889599 2635984 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 13:56:38.905522 2635984 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1121 13:56:38.905696 2635984 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1121 13:56:38.905723 2635984 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1121 13:56:38.905729 2635984 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1121 13:56:38.905736 2635984 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1121 13:56:38.948163 2635984 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1121 13:56:38.948208 2635984 cache.go:65] Caching tarball of preloaded images
	I1121 13:56:38.948402 2635984 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 13:56:38.950104 2635984 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1121 13:56:38.950133 2635984 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1121 13:56:39.030649 2635984 preload.go:295] Got checksum from GCS API "435977642a202d20ca04f26d87d875a8"
	I1121 13:56:39.030715 2635984 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:435977642a202d20ca04f26d87d875a8 -> /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-060041 host does not exist
	  To start a cluster, run: "minikube start -p download-only-060041"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-060041
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1121 13:56:45.626799 2635785 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-346290 --alsologtostderr --binary-mirror http://127.0.0.1:45347 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-346290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-346290
--- PASS: TestBinaryMirror (0.60s)
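
TestBinaryMirror starts a throwaway HTTP server and passes its address via --binary-mirror, so kubectl is fetched from 127.0.0.1 instead of dl.k8s.io. A minimal sketch of such a mirror with net/http, assuming release binaries have been staged under ./mirror (the directory and port here are hypothetical; the test wires up its own server):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve pre-staged release binaries, e.g.
		// ./mirror/release/v1.34.1/bin/linux/arm64/kubectl
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:45347", nil))
	}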

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-891209
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-891209: exit status 85 (67.610979ms)

                                                
                                                
-- stdout --
	* Profile "addons-891209" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-891209"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-891209
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-891209: exit status 85 (94.850423ms)

                                                
                                                
-- stdout --
	* Profile "addons-891209" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-891209"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
x
+
TestAddons/Setup (163.99s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-891209 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-891209 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m43.991596364s)
--- PASS: TestAddons/Setup (163.99s)

                                                
                                    
x
+
TestAddons/serial/Volcano (40.71s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 67.881435ms
addons_test.go:876: volcano-admission stabilized in 68.570678ms
addons_test.go:884: volcano-controller stabilized in 68.597041ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-8hr44" [d1506dbd-f8bf-465e-afcc-e88945173f2e] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003591168s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-9n7k2" [2090d2dd-c480-4f0a-9f73-dc1eba6f734a] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004037146s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-2l2kb" [69719cb5-4d04-49bb-9ef1-7e7d6ffc7195] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003571327s
addons_test.go:903: (dbg) Run:  kubectl --context addons-891209 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-891209 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-891209 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [e0202524-c13e-426e-ae80-0f0efbb37389] Pending
helpers_test.go:352: "test-job-nginx-0" [e0202524-c13e-426e-ae80-0f0efbb37389] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [e0202524-c13e-426e-ae80-0f0efbb37389] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003340151s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-891209 addons disable volcano --alsologtostderr -v=1: (12.020778716s)
--- PASS: TestAddons/serial/Volcano (40.71s)
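
Every "waiting 6m0s for pods matching ..." line above is the same helper pattern: poll the API server until a pod selected by label reports phase Running. A rough client-go equivalent, assuming a kubeconfig at the default path (the real helpers_test.go logic also logs pod state transitions, which this sketch omits):

	package main

	import (
		"context"
		"fmt"
		"log"
		"os"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForRunningPod polls every 2s until a pod matching selector in ns
	// reaches phase Running, or the timeout expires.
	func waitForRunningPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, err // bail on API errors; retrying is another valid policy
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		err = waitForRunningPod(context.Background(), cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute)
		fmt.Println("wait result:", err)
	}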

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-891209 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-891209 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.85s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-891209 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-891209 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f2f288f1-83c0-4f9b-803a-a00f6da553a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f2f288f1-83c0-4f9b-803a-a00f6da553a5] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.002858125s
addons_test.go:694: (dbg) Run:  kubectl --context addons-891209 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-891209 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-891209 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-891209 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.85s)
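
The FakeCredentials subtest verifies that the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS into the busybox pod by exec'ing printenv inside it. A small os/exec sketch of the same probe (context and pod names copied from the log; treat them as placeholders):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// printenv exits non-zero if the variable is unset, which surfaces as err.
		out, err := exec.Command("kubectl", "--context", "addons-891209",
			"exec", "busybox", "--", "printenv", "GOOGLE_APPLICATION_CREDENTIALS").Output()
		if err != nil {
			fmt.Println("env var not set or exec failed:", err)
			return
		}
		fmt.Println("credentials file in pod:", strings.TrimSpace(string(out)))
	}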

                                                
                                    
x
+
TestAddons/parallel/Registry (16.75s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.44314ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-cwxnc" [d1e1909c-c1ea-4732-9f38-3c69d39c9bd6] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003953756s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-n8wq2" [ef5b39d4-f41a-4e80-82a2-9b3904922e3c] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005768016s
addons_test.go:392: (dbg) Run:  kubectl --context addons-891209 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-891209 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-891209 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.692813313s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 ip
2025/11/21 14:00:47 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.75s)
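
The registry check is ultimately an HTTP reachability probe: the in-cluster wget --spider hits the service DNS name, and the test then fetches the node IP on port 5000 directly. An equivalent probe in Go (the URL is the node address from the log; the service DNS name only resolves from inside a pod):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// HEAD mirrors wget --spider: headers only, no body download.
		resp, err := client.Head("http://192.168.49.2:5000")
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}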

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.8s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 8.101951ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-891209
addons_test.go:332: (dbg) Run:  kubectl --context addons-891209 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.80s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (19.93s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-891209 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-891209 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-891209 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [b612d656-e0c0-4591-bb37-6c6668e4e2c1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [b612d656-e0c0-4591-bb37-6c6668e4e2c1] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003437857s
I1121 14:01:40.122752 2635785 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-891209 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-891209 addons disable ingress-dns --alsologtostderr -v=1: (1.405817799s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-891209 addons disable ingress --alsologtostderr -v=1: (7.8498307s)
--- PASS: TestAddons/parallel/Ingress (19.93s)
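
The ingress assertion curls 127.0.0.1 while overriding the Host header so the nginx ingress routes by virtual host rather than by the dialed IP. In Go the same trick is setting Request.Host instead of adding a header (addresses copied from the log; run it somewhere the ingress is reachable):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		// Route by virtual host: the ingress matches nginx.example.com,
		// not the literal IP we dial.
		req.Host = "nginx.example.com"

		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("ingress answered:", resp.Status)
	}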

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.82s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-k9vnm" [2d2b4cc1-cc9a-4b1d-8c86-820ed3f39ff1] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003828233s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-891209 addons disable inspektor-gadget --alsologtostderr -v=1: (5.811603582s)
--- PASS: TestAddons/parallel/InspektorGadget (11.82s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.02s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.550403ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-httql" [1736e31f-19e5-40c3-9d78-f39cb7715b07] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003457553s
addons_test.go:463: (dbg) Run:  kubectl --context addons-891209 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.02s)
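
kubectl top pods only succeeds once metrics-server is serving the metrics.k8s.io aggregated API. The same data can be read programmatically; a sketch assuming the k8s.io/metrics versioned clientset and a default kubeconfig (not how addons_test.go does it):

	package main

	import (
		"context"
		"fmt"
		"log"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		metricsv "k8s.io/metrics/pkg/client/clientset/versioned"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		mc, err := metricsv.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := mc.MetricsV1beta1().PodMetricses("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err) // fails until the metrics API is registered and ready
		}
		for _, p := range pods.Items {
			for _, c := range p.Containers {
				fmt.Printf("%s/%s cpu=%s mem=%s\n", p.Name, c.Name, c.Usage.Cpu(), c.Usage.Memory())
			}
		}
	}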

                                                
                                    
x
+
TestAddons/parallel/CSI (45.52s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1121 14:00:58.882219 2635785 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1121 14:00:58.885315 2635785 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1121 14:00:58.885344 2635785 kapi.go:107] duration metric: took 6.232352ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.242977ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-891209 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-891209 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [885fcf38-4a1c-4143-b850-702595eb2bce] Pending
helpers_test.go:352: "task-pv-pod" [885fcf38-4a1c-4143-b850-702595eb2bce] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.015793256s
addons_test.go:572: (dbg) Run:  kubectl --context addons-891209 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-891209 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-891209 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-891209 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-891209 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-891209 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-891209 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [b051465b-8725-484c-b58c-887e321efb07] Pending
helpers_test.go:352: "task-pv-pod-restore" [b051465b-8725-484c-b58c-887e321efb07] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [b051465b-8725-484c-b58c-887e321efb07] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003532444s
addons_test.go:614: (dbg) Run:  kubectl --context addons-891209 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-891209 delete pod task-pv-pod-restore: (1.403806338s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-891209 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-891209 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-891209 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.814360213s)
--- PASS: TestAddons/parallel/CSI (45.52s)
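
The CSI flow driven above is a standard snapshot/restore cycle. A by-hand sketch, assuming a hypothetical profile demo and manifests equivalent to the testdata/csi-hostpath-driver/ ones:

  kubectl --context demo create -f pvc.yaml              # PVC "hpvc", bound by csi-hostpath-driver
  kubectl --context demo create -f pv-pod.yaml           # pod "task-pv-pod" mounts the PVC
  kubectl --context demo create -f snapshot.yaml         # VolumeSnapshot "new-snapshot-demo" of hpvc
  kubectl --context demo delete pod task-pv-pod
  kubectl --context demo delete pvc hpvc
  kubectl --context demo create -f pvc-restore.yaml      # PVC "hpvc-restore" with the snapshot as dataSource
  kubectl --context demo create -f pv-pod-restore.yaml   # pod "task-pv-pod-restore" sees the restored data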

                                                
                                    
TestAddons/parallel/Headlamp (17.9s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-891209 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-891209 --alsologtostderr -v=1: (1.101052369s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-nljrt" [537918b0-834b-4d70-8899-b6f4991c9d8d] Pending
helpers_test.go:352: "headlamp-6945c6f4d-nljrt" [537918b0-834b-4d70-8899-b6f4991c9d8d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-nljrt" [537918b0-834b-4d70-8899-b6f4991c9d8d] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.005161099s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-891209 addons disable headlamp --alsologtostderr -v=1: (5.79420281s)
--- PASS: TestAddons/parallel/Headlamp (17.90s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.67s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-6xgc4" [02d12c64-7ed4-4659-8196-d1e99940916d] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005098633s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.67s)

                                                
                                    
TestAddons/parallel/LocalPath (10.93s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-891209 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-891209 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-891209 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [b2953fc3-db28-4b32-86af-92abbcc7fd2b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [b2953fc3-db28-4b32-86af-92abbcc7fd2b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [b2953fc3-db28-4b32-86af-92abbcc7fd2b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003195815s
addons_test.go:967: (dbg) Run:  kubectl --context addons-891209 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 ssh "cat /opt/local-path-provisioner/pvc-60c0ace7-b795-4080-9d88-340f76370ee5_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-891209 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-891209 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.93s)
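
A by-hand sketch of the local-path check, assuming a hypothetical profile demo; the provisioner writes each PV as a host directory under /opt/local-path-provisioner (the directory name below is illustrative, not from this log):

  minikube addons enable storage-provisioner-rancher -p demo
  kubectl --context demo apply -f pvc.yaml    # PVC "test-pvc" using the local-path StorageClass
  kubectl --context demo apply -f pod.yaml    # pod that writes file1 into the volume
  minikube -p demo ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"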

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (7.03s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-lgsbj" [1adee75e-c65e-44c0-ba57-e05f744d05b7] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.010883498s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-891209 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.014409095s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.03s)

                                                
                                    
TestAddons/parallel/Yakd (10.86s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-bx6jx" [fa19e4fe-4441-4aa5-a42b-271f977ab853] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00338702s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-891209 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-891209 addons disable yakd --alsologtostderr -v=1: (5.85355011s)
--- PASS: TestAddons/parallel/Yakd (10.86s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.37s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-891209
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-891209: (12.081726189s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-891209
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-891209
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-891209
--- PASS: TestAddons/StoppedEnableDisable (12.37s)
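
Addon state can be toggled while the cluster is down, which is what this test exercises; a minimal sketch, assuming a hypothetical profile demo:

  minikube stop -p demo
  minikube addons enable dashboard -p demo    # accepted even though the node is stopped
  minikube addons disable dashboard -p demo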

                                                
                                    
TestCertOptions (34.85s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-035007 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-035007 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (32.015861044s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-035007 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-035007 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-035007 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-035007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-035007
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-035007: (2.078458202s)
--- PASS: TestCertOptions (34.85s)
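
The extra SANs and the non-default port requested above end up in the generated apiserver certificate; a sketch, assuming a hypothetical profile demo:

  minikube start -p demo --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555
  # the requested IPs and names appear under X509v3 Subject Alternative Name:
  minikube -p demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"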

                                                
                                    
TestCertExpiration (230.89s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-184410 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-184410 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.136850939s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-184410 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-184410 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.254913248s)
helpers_test.go:175: Cleaning up "cert-expiration-184410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-184410
E1121 14:46:07.267023 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-184410: (2.492034157s)
--- PASS: TestCertExpiration (230.89s)
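
Most of the ~4-minute wall time is spent deliberately waiting out a 3m certificate lifetime; a sketch, assuming a hypothetical profile demo (the sleep is an assumption about the wait, not something this log prints):

  minikube start -p demo --cert-expiration=3m
  sleep 180                                        # let the certificates expire
  minikube start -p demo --cert-expiration=8760h   # restart regenerates the expired certs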

                                                
                                    
TestForceSystemdFlag (45.3s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-826524 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-826524 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.316900926s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-826524 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-826524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-826524
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-826524: (3.554800988s)
--- PASS: TestForceSystemdFlag (45.30s)
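
--force-systemd switches the runtime's cgroup driver, and for containerd the effect is visible in its config file. A sketch, assuming a hypothetical profile demo (the grep target is the usual runc option name, not something this log prints):

  minikube start -p demo --force-systemd --driver=docker --container-runtime=containerd
  minikube -p demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup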

                                                
                                    
TestForceSystemdEnv (48.53s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-041746 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-041746 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (45.744336016s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-041746 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-041746" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-041746
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-041746: (2.390739853s)
--- PASS: TestForceSystemdEnv (48.53s)

                                                
                                    
TestDockerEnvContainerd (49.1s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-715738 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-715738 --driver=docker  --container-runtime=containerd: (33.183472775s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-715738"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-715738": (1.076615238s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KUdiJ59LmvmT/agent.2654871" SSH_AGENT_PID="2654872" DOCKER_HOST=ssh://docker@127.0.0.1:36440 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KUdiJ59LmvmT/agent.2654871" SSH_AGENT_PID="2654872" DOCKER_HOST=ssh://docker@127.0.0.1:36440 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KUdiJ59LmvmT/agent.2654871" SSH_AGENT_PID="2654872" DOCKER_HOST=ssh://docker@127.0.0.1:36440 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.231381031s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KUdiJ59LmvmT/agent.2654871" SSH_AGENT_PID="2654872" DOCKER_HOST=ssh://docker@127.0.0.1:36440 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-715738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-715738
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-715738: (2.118288093s)
--- PASS: TestDockerEnvContainerd (49.10s)
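
The docker-env round-trip above amounts to pointing a host docker client at the daemon inside the node over SSH; a sketch, assuming a hypothetical profile demo:

  eval "$(minikube -p demo docker-env --ssh-host --ssh-add)"
  docker version                        # now served by the daemon in the minikube node
  DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest .
  docker image ls                       # the freshly built image is listed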

                                                
                                    
TestErrorSpam/setup (33.25s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-589410 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-589410 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-589410 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-589410 --driver=docker  --container-runtime=containerd: (33.246584462s)
--- PASS: TestErrorSpam/setup (33.25s)

                                                
                                    
TestErrorSpam/start (0.8s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-589410 --log_dir /tmp/nospam-589410 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-589410 --log_dir /tmp/nospam-589410 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-589410 --log_dir /tmp/nospam-589410 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-589410 --log_dir /tmp/nospam-589410 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-589410 --log_dir /tmp/nospam-589410 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-589410 --log_dir /tmp/nospam-589410 status
--- PASS: TestErrorSpam/status (1.14s)

                                                
                                    
TestErrorSpam/pause (1.71s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-589410 --log_dir /tmp/nospam-589410 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-589410 --log_dir /tmp/nospam-589410 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-589410 --log_dir /tmp/nospam-589410 pause
--- PASS: TestErrorSpam/pause (1.71s)

                                                
                                    
TestErrorSpam/unpause (1.82s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-589410 --log_dir /tmp/nospam-589410 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-589410 --log_dir /tmp/nospam-589410 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-589410 --log_dir /tmp/nospam-589410 unpause
--- PASS: TestErrorSpam/unpause (1.82s)

                                                
                                    
TestErrorSpam/stop (1.6s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-589410 --log_dir /tmp/nospam-589410 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-589410 --log_dir /tmp/nospam-589410 stop: (1.39725084s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-589410 --log_dir /tmp/nospam-589410 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-589410 --log_dir /tmp/nospam-589410 stop
--- PASS: TestErrorSpam/stop (1.60s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21847-2633933/.minikube/files/etc/test/nested/copy/2635785/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (77.97s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-907462 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1121 14:04:30.356627 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:30.363747 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:30.375222 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:30.396789 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:30.438247 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:30.519689 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:30.681224 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:31.003048 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:31.645195 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:32.926632 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:35.488102 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:40.610294 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:50.851948 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-907462 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m17.965902509s)
--- PASS: TestFunctional/serial/StartWithProxy (77.97s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.55s)

=== RUN   TestFunctional/serial/SoftStart
I1121 14:05:00.491735 2635785 config.go:182] Loaded profile config "functional-907462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-907462 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-907462 --alsologtostderr -v=8: (7.547372079s)
functional_test.go:678: soft start took 7.550696783s for "functional-907462" cluster.
I1121 14:05:08.039602 2635785 config.go:182] Loaded profile config "functional-907462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (7.55s)
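
A "soft start" is simply minikube start against a profile that is already running, which reconciles state instead of re-provisioning the node; a sketch, assuming a hypothetical profile demo:

  minikube start -p demo   # initial provisioning: tens of seconds
  minikube start -p demo   # soft start on the running cluster: a few seconds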

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-907462 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-907462 cache add registry.k8s.io/pause:3.1: (1.286949953s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-907462 cache add registry.k8s.io/pause:3.3: (1.121119388s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 cache add registry.k8s.io/pause:latest
E1121 14:05:11.334363 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-907462 cache add registry.k8s.io/pause:latest: (1.092301119s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.50s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-907462 /tmp/TestFunctionalserialCacheCmdcacheadd_local202697629/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 cache add minikube-local-cache-test:functional-907462
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 cache delete minikube-local-cache-test:functional-907462
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-907462
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-907462 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (286.713666ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)
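
The reload cycle above can be reproduced directly; a sketch, assuming a hypothetical profile demo:

  minikube -p demo ssh sudo crictl rmi registry.k8s.io/pause:latest       # drop the image on the node
  minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1: no such image
  minikube -p demo cache reload                                           # re-push cached images to the node
  minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again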

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 kubectl -- --context functional-907462 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-907462 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.95s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-907462 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1121 14:05:52.295752 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-907462 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.950808156s)
functional_test.go:776: restart took 41.950911529s for "functional-907462" cluster.
I1121 14:05:57.571025 2635785 config.go:182] Loaded profile config "functional-907462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (41.95s)
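
--extra-config takes component.key=value pairs that are passed through to the matching control-plane component on restart; a sketch, assuming a hypothetical profile demo:

  minikube start -p demo \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all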

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-907462 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.49s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-907462 logs: (1.491157851s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.84s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 logs --file /tmp/TestFunctionalserialLogsFileCmd2843273721/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-907462 logs --file /tmp/TestFunctionalserialLogsFileCmd2843273721/001/logs.txt: (1.83807s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.84s)

                                                
                                    
TestFunctional/serial/InvalidService (4.32s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-907462 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-907462
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-907462: exit status 115 (423.968972ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31944 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-907462 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.32s)
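
minikube service fails fast (exit 115, SVC_UNREACHABLE) when the target service has no running pods behind it; a sketch, assuming a hypothetical profile demo and a manifest like testdata/invalidsvc.yaml whose selector matches no pods:

  kubectl --context demo apply -f invalidsvc.yaml
  minikube service invalid-svc -p demo    # exit 115 instead of hanging on a dead endpoint
  kubectl --context demo delete -f invalidsvc.yaml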

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-907462 config get cpus: exit status 14 (71.900123ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-907462 config get cpus: exit status 14 (86.90321ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
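
config get on an unset key exits 14, which is what both Non-zero exits above assert; a sketch, assuming a hypothetical profile demo:

  minikube -p demo config set cpus 2
  minikube -p demo config get cpus     # prints 2, exit 0
  minikube -p demo config unset cpus
  minikube -p demo config get cpus     # exit 14: specified key could not be found in config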

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.89s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-907462 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-907462 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 2670104: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.89s)

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-907462 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-907462 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (204.912356ms)

-- stdout --
	* [functional-907462] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1121 14:06:38.083292 2669773 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:06:38.083493 2669773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:06:38.083521 2669773 out.go:374] Setting ErrFile to fd 2...
	I1121 14:06:38.083544 2669773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:06:38.083889 2669773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:06:38.084340 2669773 out.go:368] Setting JSON to false
	I1121 14:06:38.085493 2669773 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":67746,"bootTime":1763666252,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 14:06:38.085600 2669773 start.go:143] virtualization:  
	I1121 14:06:38.088830 2669773 out.go:179] * [functional-907462] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:06:38.092559 2669773 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:06:38.092710 2669773 notify.go:221] Checking for updates...
	I1121 14:06:38.098161 2669773 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:06:38.101151 2669773 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:06:38.104135 2669773 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	I1121 14:06:38.107296 2669773 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:06:38.110169 2669773 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:06:38.113518 2669773 config.go:182] Loaded profile config "functional-907462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:06:38.114148 2669773 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:06:38.145248 2669773 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:06:38.145355 2669773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:06:38.217591 2669773 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-21 14:06:38.201093521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:06:38.217705 2669773 docker.go:319] overlay module found
	I1121 14:06:38.220806 2669773 out.go:179] * Using the docker driver based on existing profile
	I1121 14:06:38.223645 2669773 start.go:309] selected driver: docker
	I1121 14:06:38.223664 2669773 start.go:930] validating driver "docker" against &{Name:functional-907462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-907462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:06:38.223771 2669773 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:06:38.227306 2669773 out.go:203] 
	W1121 14:06:38.230209 2669773 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1121 14:06:38.233068 2669773 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-907462 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.45s)

TestFunctional/parallel/InternationalLanguage (0.26s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-907462 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-907462 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (256.5638ms)

-- stdout --
	* [functional-907462] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1121 14:06:37.843993 2669654 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:06:37.844207 2669654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:06:37.844243 2669654 out.go:374] Setting ErrFile to fd 2...
	I1121 14:06:37.844264 2669654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:06:37.845484 2669654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:06:37.845926 2669654 out.go:368] Setting JSON to false
	I1121 14:06:37.846933 2669654 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":67746,"bootTime":1763666252,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 14:06:37.847031 2669654 start.go:143] virtualization:  
	I1121 14:06:37.850925 2669654 out.go:179] * [functional-907462] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1121 14:06:37.854659 2669654 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:06:37.854817 2669654 notify.go:221] Checking for updates...
	I1121 14:06:37.862897 2669654 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:06:37.865812 2669654 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:06:37.868826 2669654 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	I1121 14:06:37.871689 2669654 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:06:37.874496 2669654 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:06:37.878017 2669654 config.go:182] Loaded profile config "functional-907462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:06:37.878581 2669654 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:06:37.923170 2669654 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:06:37.923283 2669654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:06:38.010236 2669654 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-21 14:06:37.997779585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:06:38.010364 2669654 docker.go:319] overlay module found
	I1121 14:06:38.013856 2669654 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1121 14:06:38.016273 2669654 start.go:309] selected driver: docker
	I1121 14:06:38.016301 2669654 start.go:930] validating driver "docker" against &{Name:functional-907462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-907462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:06:38.016427 2669654 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:06:38.022282 2669654 out.go:203] 
	W1121 14:06:38.025235 2669654 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1121 14:06:38.028200 2669654 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)
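
The French lines above are the localized form of the RSRC_INSUFFICIENT_REQ_MEMORY error shown in English in the DryRun block ("Requested memory allocation 250MiB is less than the usable minimum of 1800MB"). For readers reproducing this outside the suite, here is a minimal Go sketch; the locale variables and helper structure are assumptions, not the suite's actual code, though the binary path and profile name match this run.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Sketch: force a French locale before invoking minikube so the dry-run
// memory validation fails with the translated message. How minikube picks
// the translation from LC_ALL/LANG is an assumption here.
func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "functional-907462", "--dry-run", "--memory", "250MB",
		"--driver=docker", "--container-runtime=containerd")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("expected a non-zero exit: 250MB is below the usable minimum")
		return
	}
	if strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("localized resource error emitted as expected")
	}
}
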

TestFunctional/parallel/StatusCmd (1.32s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.32s)
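
The format string above pulls individual fields out of the status struct via the template keys {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} (the misspelled "kublet" label is a literal in the test's format string, not a minikube field name). A minimal sketch of consuming the JSON form instead; the struct below is an assumption covering only those four fields, since the real payload carries more.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Subset of the fields exercised by the template variant above.
type clusterStatus struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// minikube signals degraded states via non-zero exits, so the output
	// may still be worth parsing even when err != nil.
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-907462", "status", "-o", "json").Output()
	if err != nil {
		fmt.Println("non-zero exit from status:", err)
	}
	var st clusterStatus
	if json.Unmarshal(out, &st) == nil {
		fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
	}
}
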

TestFunctional/parallel/ServiceCmdConnect (9.61s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-907462 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-907462 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-7zkkk" [c437f6ed-a849-4954-9525-6998f39dce89] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-7zkkk" [c437f6ed-a849-4954-9525-6998f39dce89] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004142733s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32353
functional_test.go:1680: http://192.168.49.2:32353: success! body:
Request served by hello-node-connect-7d85dfc575-7zkkk

HTTP/1.1 GET /

Host: 192.168.49.2:32353
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.61s)
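
The final verification above is a plain HTTP GET against the NodePort URL that `minikube service hello-node-connect --url` printed. A minimal sketch of that step, using the URL from this run (a fresh run will print a different port):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// URL as printed by `minikube service hello-node-connect --url` above.
	resp, err := http.Get("http://192.168.49.2:32353")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// kicbase/echo-server echoes the serving pod's name, which is how the
	// response is tied back to the hello-node-connect deployment.
	if strings.Contains(string(body), "hello-node-connect") {
		fmt.Println("NodePort reachable, served by a hello-node-connect pod")
	}
}
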

TestFunctional/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (26.9s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [ec08ae77-719a-4b70-88d9-8c5bab5449db] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00405278s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-907462 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-907462 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-907462 get pvc myclaim -o=json
I1121 14:06:14.183263 2635785 retry.go:31] will retry after 2.913426287s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:051a8661-594c-41d8-aad4-f8f6b4865c83 ResourceVersion:634 Generation:0 CreationTimestamp:2025-11-21 14:06:14 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x40015cd560 VolumeMode:0x40015cd570 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-907462 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-907462 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [1b5f447a-1488-4bcb-8003-8ced92ccabb3] Pending
helpers_test.go:352: "sp-pod" [1b5f447a-1488-4bcb-8003-8ced92ccabb3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [1b5f447a-1488-4bcb-8003-8ced92ccabb3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004424737s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-907462 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-907462 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-907462 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [64425c07-6fde-407b-91b2-e44b969814c8] Pending
helpers_test.go:352: "sp-pod" [64425c07-6fde-407b-91b2-e44b969814c8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00511074s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-907462 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.90s)
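
The retry at 14:06:14 above is the interesting part: the claim sits in Pending until the storage-provisioner addon binds it to a hostpath volume, so the test polls the phase instead of asserting it once. A stand-alone sketch of the same wait using kubectl's JSONPath output; the context and claim name are the ones from this run, and the timeout and interval are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 30; i++ {
		// Read just the phase field of the claim.
		out, err := exec.Command("kubectl", "--context", "functional-907462",
			"get", "pvc", "myclaim", "-o", "jsonpath={.status.phase}").Output()
		if err == nil && string(out) == "Bound" {
			fmt.Println("pvc is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("pvc did not bind in time")
}
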

TestFunctional/parallel/SSHCmd (0.72s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.52s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh -n functional-907462 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 cp functional-907462:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd630158239/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh -n functional-907462 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh -n functional-907462 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.52s)

TestFunctional/parallel/FileSync (0.41s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/2635785/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "sudo cat /etc/test/nested/copy/2635785/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

TestFunctional/parallel/CertSync (2.25s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/2635785.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "sudo cat /etc/ssl/certs/2635785.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/2635785.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "sudo cat /usr/share/ca-certificates/2635785.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/26357852.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "sudo cat /etc/ssl/certs/26357852.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/26357852.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "sudo cat /usr/share/ca-certificates/26357852.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.25s)

TestFunctional/parallel/NodeLabels (0.32s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-907462 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.32s)
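
The go-template in that command is worth unpacking: `(index .items 0)` selects the first node from the list response, and the `range` over `.metadata.labels` prints only the keys. A self-contained illustration against a hand-built document (the sample labels are illustrative, not from this run):

package main

import (
	"os"
	"text/template"
)

// Stand-alone demo of the template used by the NodeLabels test: it walks
// the labels map of the first item and emits each key followed by a space.
func main() {
	data := map[string]any{
		"items": []map[string]any{
			{"metadata": map[string]any{"labels": map[string]string{
				"kubernetes.io/arch": "arm64",
				"kubernetes.io/os":   "linux",
			}}},
		},
	}
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	_ = tmpl.Execute(os.Stdout, data) // prints: kubernetes.io/arch kubernetes.io/os
}
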

TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-907462 ssh "sudo systemctl is-active docker": exit status 1 (309.613374ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-907462 ssh "sudo systemctl is-active crio": exit status 1 (373.878361ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)
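
Both probes pass because `systemctl is-active` exits non-zero (conventionally 3) for an inactive unit while still printing "inactive" on stdout; with containerd as the active runtime, the docker and crio units should be inactive. A minimal sketch of the same check (the helper name is mine, not the suite's):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runtimeActive reports whether a systemd unit inside the minikube node is
// active, treating any non-zero exit from `systemctl is-active` as "not
// active" rather than as a hard error.
func runtimeActive(unit string) (bool, error) {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-907462",
		"ssh", "sudo systemctl is-active "+unit)
	_, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // non-zero exit: unit is not active
	}
	if err != nil {
		return false, err // could not run the probe at all
	}
	return true, nil
}

func main() {
	for _, unit := range []string{"docker", "crio"} {
		active, err := runtimeActive(unit)
		fmt.Println(unit, "active:", active, "err:", err)
	}
}
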

TestFunctional/parallel/License (0.42s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.42s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-907462 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-907462 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-907462 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-907462 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 2667303: os: process already finished
helpers_test.go:525: unable to kill pid 2667104: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-907462 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.4s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-907462 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [7635c4db-7dce-4b50-90a6-9971f485412e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [7635c4db-7dce-4b50-90a6-9971f485412e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003590828s
I1121 14:06:15.663833 2635785 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.40s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-907462 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.17.107 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-907462 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-907462 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-907462 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-8cvwx" [ef333c0d-73a5-4703-8794-16fb0a30e9e7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-8cvwx" [ef333c0d-73a5-4703-8794-16fb0a30e9e7] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003936262s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

TestFunctional/parallel/ServiceCmd/List (0.52s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 service list -o json
functional_test.go:1504: Took "518.208948ms" to run "out/minikube-linux-arm64 -p functional-907462 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32231
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32231
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)

TestFunctional/parallel/MountCmd/any-port (8.62s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-907462 /tmp/TestFunctionalparallelMountCmdany-port2950237340/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763733995197248981" to /tmp/TestFunctionalparallelMountCmdany-port2950237340/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763733995197248981" to /tmp/TestFunctionalparallelMountCmdany-port2950237340/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763733995197248981" to /tmp/TestFunctionalparallelMountCmdany-port2950237340/001/test-1763733995197248981
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-907462 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (490.208313ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1121 14:06:35.687819 2635785 retry.go:31] will retry after 417.635439ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 21 14:06 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 21 14:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 21 14:06 test-1763733995197248981
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh cat /mount-9p/test-1763733995197248981
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-907462 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [d2787be4-bee6-4be4-bf88-4e80c40af757] Pending
helpers_test.go:352: "busybox-mount" [d2787be4-bee6-4be4-bf88-4e80c40af757] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [d2787be4-bee6-4be4-bf88-4e80c40af757] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [d2787be4-bee6-4be4-bf88-4e80c40af757] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003739533s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-907462 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-907462 /tmp/TestFunctionalparallelMountCmdany-port2950237340/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.62s)
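
Note the failed `findmnt` followed by retry.go's 417ms backoff: the 9p mount comes up asynchronously after the `minikube mount` daemon starts, so the probe is polled rather than asserted once. A stand-alone sketch of that loop; the timeout and backoff below are illustrative, not the suite's values.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		// Probe the guest for the 9p mount, just like the test does.
		out, err := exec.Command("out/minikube-linux-arm64", "-p",
			"functional-907462", "ssh", "findmnt -T /mount-9p").Output()
		if err == nil {
			fmt.Printf("mount is up:\n%s", out)
			return
		}
		time.Sleep(500 * time.Millisecond) // comparable to the logged retry
	}
	fmt.Println("mount never appeared")
}
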

TestFunctional/parallel/ProfileCmd/profile_list (0.61s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "547.505469ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "61.145939ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.61s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "427.218317ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "60.57538ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

TestFunctional/parallel/MountCmd/specific-port (2.41s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-907462 /tmp/TestFunctionalparallelMountCmdspecific-port3142449644/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-907462 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (583.769871ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1121 14:06:44.397831 2635785 retry.go:31] will retry after 485.477124ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-907462 /tmp/TestFunctionalparallelMountCmdspecific-port3142449644/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-907462 ssh "sudo umount -f /mount-9p": exit status 1 (275.035142ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-907462 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-907462 /tmp/TestFunctionalparallelMountCmdspecific-port3142449644/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
2025/11/21 14:06:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.41s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.1s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-907462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2217380578/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-907462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2217380578/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-907462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2217380578/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-907462 ssh "findmnt -T" /mount1: exit status 1 (661.21521ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1121 14:06:46.885313 2635785 retry.go:31] will retry after 404.426691ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-907462 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-907462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2217380578/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-907462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2217380578/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-907462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2217380578/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.10s)

TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.35s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-907462 version -o=json --components: (1.348643756s)
--- PASS: TestFunctional/parallel/Version/components (1.35s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-907462 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-907462
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-907462
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-907462 image ls --format short --alsologtostderr:
I1121 14:06:54.710613 2672886 out.go:360] Setting OutFile to fd 1 ...
I1121 14:06:54.713058 2672886 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:06:54.713113 2672886 out.go:374] Setting ErrFile to fd 2...
I1121 14:06:54.713136 2672886 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:06:54.713462 2672886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
I1121 14:06:54.714203 2672886 config.go:182] Loaded profile config "functional-907462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:06:54.714381 2672886 config.go:182] Loaded profile config "functional-907462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:06:54.714941 2672886 cli_runner.go:164] Run: docker container inspect functional-907462 --format={{.State.Status}}
I1121 14:06:54.745228 2672886 ssh_runner.go:195] Run: systemctl --version
I1121 14:06:54.745278 2672886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-907462
I1121 14:06:54.763397 2672886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36450 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/functional-907462/id_rsa Username:docker}
I1121 14:06:54.868042 2672886 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-907462 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                     │ latest             │ sha256:bb747c │ 58.3MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ docker.io/kicbase/echo-server               │ functional-907462  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/library/minikube-local-cache-test │ functional-907462  │ sha256:ef3d4a │ 989B   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:43911e │ 24.6MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:7eb2c6 │ 20.7MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:b5f57e │ 15.8MB │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:05baa9 │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-907462 image ls --format table --alsologtostderr:
I1121 14:06:55.026845 2672960 out.go:360] Setting OutFile to fd 1 ...
I1121 14:06:55.027043 2672960 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:06:55.027067 2672960 out.go:374] Setting ErrFile to fd 2...
I1121 14:06:55.027087 2672960 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:06:55.027396 2672960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
I1121 14:06:55.028113 2672960 config.go:182] Loaded profile config "functional-907462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:06:55.028281 2672960 config.go:182] Loaded profile config "functional-907462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:06:55.028807 2672960 cli_runner.go:164] Run: docker container inspect functional-907462 --format={{.State.Status}}
I1121 14:06:55.053786 2672960 ssh_runner.go:195] Run: systemctl --version
I1121 14:06:55.053847 2672960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-907462
I1121 14:06:55.076074 2672960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36450 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/functional-907462/id_rsa Username:docker}
I1121 14:06:55.188727 2672960 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-907462 image ls --format json --alsologtostderr:
[{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ef3d4a58be960a328d71f8f9c597e0c9db1a22a66e6634a337797af72d140615","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-907462"],"size":"989"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23117513"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a
5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-907462"],"size":"2173567"},{"id":"sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"5826354
8"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"24571109"},{"id":"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"20720058"},{"id":"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1
ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"22788047"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"15779817"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3
babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-907462 image ls --format json --alsologtostderr:
I1121 14:06:54.989632 2672956 out.go:360] Setting OutFile to fd 1 ...
I1121 14:06:54.990284 2672956 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:06:54.990296 2672956 out.go:374] Setting ErrFile to fd 2...
I1121 14:06:54.990301 2672956 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:06:54.990641 2672956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
I1121 14:06:54.991397 2672956 config.go:182] Loaded profile config "functional-907462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:06:54.991500 2672956 config.go:182] Loaded profile config "functional-907462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:06:54.991947 2672956 cli_runner.go:164] Run: docker container inspect functional-907462 --format={{.State.Status}}
I1121 14:06:55.026186 2672956 ssh_runner.go:195] Run: systemctl --version
I1121 14:06:55.026288 2672956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-907462
I1121 14:06:55.055493 2672956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36450 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/functional-907462/id_rsa Username:docker}
I1121 14:06:55.160186 2672956 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

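A note for anyone post-processing this report: the JSON stdout above is a flat array of image objects, so it pipes cleanly into standard tooling. A minimal sketch, assuming jq is available on the host (profile name taken from this run):

	# Print every tagged image reported by image ls --format json.
	out/minikube-linux-arm64 -p functional-907462 image ls --format json \
	  | jq -r '.[] | .repoTags[]?'
	# Pair each tag with its reported size in bytes.
	out/minikube-linux-arm64 -p functional-907462 image ls --format json \
	  | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0]) \(.size)"'
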
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-907462 image ls --format yaml --alsologtostderr:
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "15779817"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "58263548"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-907462
size: "2173567"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "24571109"
- id: sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "20720058"
- id: sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "22788047"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ef3d4a58be960a328d71f8f9c597e0c9db1a22a66e6634a337797af72d140615
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-907462
size: "989"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-907462 image ls --format yaml --alsologtostderr:
I1121 14:06:54.719928 2672885 out.go:360] Setting OutFile to fd 1 ...
I1121 14:06:54.720094 2672885 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:06:54.720106 2672885 out.go:374] Setting ErrFile to fd 2...
I1121 14:06:54.720113 2672885 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:06:54.720385 2672885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
I1121 14:06:54.721012 2672885 config.go:182] Loaded profile config "functional-907462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:06:54.721168 2672885 config.go:182] Loaded profile config "functional-907462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:06:54.721660 2672885 cli_runner.go:164] Run: docker container inspect functional-907462 --format={{.State.Status}}
I1121 14:06:54.747417 2672885 ssh_runner.go:195] Run: systemctl --version
I1121 14:06:54.747471 2672885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-907462
I1121 14:06:54.773177 2672885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36450 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/functional-907462/id_rsa Username:docker}
I1121 14:06:54.879862 2672885 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-907462 ssh pgrep buildkitd: exit status 1 (278.062976ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image build -t localhost/my-image:functional-907462 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-907462 image build -t localhost/my-image:functional-907462 testdata/build --alsologtostderr: (3.377635321s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-907462 image build -t localhost/my-image:functional-907462 testdata/build --alsologtostderr:
I1121 14:06:55.534129 2673094 out.go:360] Setting OutFile to fd 1 ...
I1121 14:06:55.535502 2673094 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:06:55.535547 2673094 out.go:374] Setting ErrFile to fd 2...
I1121 14:06:55.535572 2673094 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:06:55.535872 2673094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
I1121 14:06:55.536535 2673094 config.go:182] Loaded profile config "functional-907462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:06:55.538922 2673094 config.go:182] Loaded profile config "functional-907462": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 14:06:55.539472 2673094 cli_runner.go:164] Run: docker container inspect functional-907462 --format={{.State.Status}}
I1121 14:06:55.558408 2673094 ssh_runner.go:195] Run: systemctl --version
I1121 14:06:55.558466 2673094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-907462
I1121 14:06:55.576743 2673094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36450 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/functional-907462/id_rsa Username:docker}
I1121 14:06:55.675576 2673094 build_images.go:162] Building image from path: /tmp/build.1761143247.tar
I1121 14:06:55.675647 2673094 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1121 14:06:55.683670 2673094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1761143247.tar
I1121 14:06:55.687438 2673094 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1761143247.tar: stat -c "%s %y" /var/lib/minikube/build/build.1761143247.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1761143247.tar': No such file or directory
I1121 14:06:55.687469 2673094 ssh_runner.go:362] scp /tmp/build.1761143247.tar --> /var/lib/minikube/build/build.1761143247.tar (3072 bytes)
I1121 14:06:55.705764 2673094 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1761143247
I1121 14:06:55.713940 2673094 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1761143247 -xf /var/lib/minikube/build/build.1761143247.tar
I1121 14:06:55.722265 2673094 containerd.go:394] Building image: /var/lib/minikube/build/build.1761143247
I1121 14:06:55.722357 2673094 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1761143247 --local dockerfile=/var/lib/minikube/build/build.1761143247 --output type=image,name=localhost/my-image:functional-907462
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:865871f58a611593e82072f9c1acfbb7369b0c6b9bdecf841a30b31927d700a6 0.0s done
#8 exporting config sha256:6fccac2054dff3af0b233264dfb211d9d9c4d2dded0a98854e5842011fa4c5ba 0.0s done
#8 naming to localhost/my-image:functional-907462 done
#8 DONE 0.2s
I1121 14:06:58.828968 2673094 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1761143247 --local dockerfile=/var/lib/minikube/build/build.1761143247 --output type=image,name=localhost/my-image:functional-907462: (3.106579519s)
I1121 14:06:58.829061 2673094 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1761143247
I1121 14:06:58.837912 2673094 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1761143247.tar
I1121 14:06:58.846798 2673094 build_images.go:218] Built localhost/my-image:functional-907462 from /tmp/build.1761143247.tar
I1121 14:06:58.846829 2673094 build_images.go:134] succeeded building to: functional-907462
I1121 14:06:58.846834 2673094 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.91s)

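The buildkit trace above implies a three-step Dockerfile under testdata/build: a busybox base, a no-op RUN, and a single ADD. A hypothetical reconstruction follows; the directory, file names, and content.txt payload are assumptions, not the repository's actual testdata:

	# Recreate an equivalent build context by hand.
	mkdir -p /tmp/build-demo && cd /tmp/build-demo
	echo hello > content.txt
	cat > Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /
	EOF
	# minikube tars the context, copies it into the node, and runs buildctl
	# there, exactly as the ssh_runner lines above show.
	out/minikube-linux-arm64 -p functional-907462 image build -t localhost/my-image:functional-907462 .
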
TestFunctional/parallel/ImageCommands/Setup (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-907462
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.67s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image load --daemon kicbase/echo-server:functional-907462 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-907462 image load --daemon kicbase/echo-server:functional-907462 --alsologtostderr: (1.123041039s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.42s)

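image load --daemon copies an image from the host's Docker daemon into the cluster node's containerd store; the follow-up image ls is what confirms it arrived. The same round trip by hand (tag reused from the Setup step above; the grep filter is illustrative):

	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-907462
	out/minikube-linux-arm64 -p functional-907462 image load --daemon kicbase/echo-server:functional-907462
	out/minikube-linux-arm64 -p functional-907462 image ls | grep echo-server
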
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image load --daemon kicbase/echo-server:functional-907462 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-907462 image load --daemon kicbase/echo-server:functional-907462 --alsologtostderr: (1.010246437s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-907462
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image load --daemon kicbase/echo-server:functional-907462 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.50s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image save kicbase/echo-server:functional-907462 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image rm kicbase/echo-server:functional-907462 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

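ImageSaveToFile, ImageRemove, and ImageLoadFromFile above form a tar round trip: export the image from the cluster, delete it, then restore it from the file. Condensed into one sequence, with /tmp standing in for the Jenkins workspace path used in this run:

	out/minikube-linux-arm64 -p functional-907462 image save kicbase/echo-server:functional-907462 /tmp/echo-server-save.tar
	out/minikube-linux-arm64 -p functional-907462 image rm kicbase/echo-server:functional-907462
	out/minikube-linux-arm64 -p functional-907462 image load /tmp/echo-server-save.tar
	out/minikube-linux-arm64 -p functional-907462 image ls
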
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-907462
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-907462 image save --daemon kicbase/echo-server:functional-907462 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-907462
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

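ImageSaveDaemon inverts the earlier load test: the host-side tag is removed, image save --daemon exports the image from the cluster back into the host's Docker daemon, and docker image inspect proves it is back. The same three steps by hand (commands verbatim from the log above):

	docker rmi kicbase/echo-server:functional-907462
	out/minikube-linux-arm64 -p functional-907462 image save --daemon kicbase/echo-server:functional-907462
	docker image inspect kicbase/echo-server:functional-907462
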
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-907462
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-907462
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-907462
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (207.94s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1121 14:07:14.217762 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:09:30.354291 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:09:58.059130 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-956728 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (3m27.008773255s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (207.94s)

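start --ha provisions a three-node control plane (AddWorkerNode later adds a fourth, worker, node), which is why the status output further down lists ha-956728, -m02, and -m03 as Control Plane. A quick way to confirm the topology from the host; the role label is the standard kubeadm one, an assumption rather than anything minikube-specific:

	kubectl --context ha-956728 get nodes -l node-role.kubernetes.io/control-plane -o name
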
TestMultiControlPlane/serial/DeployApp (7.83s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-956728 kubectl -- rollout status deployment/busybox: (4.741024804s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- exec busybox-7b57f96db7-lq7nj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- exec busybox-7b57f96db7-mjl49 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- exec busybox-7b57f96db7-vpghz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- exec busybox-7b57f96db7-lq7nj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- exec busybox-7b57f96db7-mjl49 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- exec busybox-7b57f96db7-vpghz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- exec busybox-7b57f96db7-lq7nj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- exec busybox-7b57f96db7-mjl49 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- exec busybox-7b57f96db7-vpghz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.83s)

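The deploy check above is a fixed matrix: every busybox replica must resolve kubernetes.io, kubernetes.default, and the fully qualified kubernetes.default.svc.cluster.local. The same sweep as a loop; the app=busybox label selector is an assumption about ha-pod-dns-test.yaml, so adjust it to the manifest's actual labels:

	for pod in $(kubectl --context ha-956728 get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
	  for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
	    kubectl --context ha-956728 exec "$pod" -- nslookup "$name"
	  done
	done
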
TestMultiControlPlane/serial/PingHostFromPods (1.7s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- exec busybox-7b57f96db7-lq7nj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- exec busybox-7b57f96db7-lq7nj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- exec busybox-7b57f96db7-mjl49 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- exec busybox-7b57f96db7-mjl49 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- exec busybox-7b57f96db7-vpghz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 kubectl -- exec busybox-7b57f96db7-vpghz -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.70s)

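Each ping check above is two exec calls: resolve host.minikube.internal inside the pod, scrape the address out of nslookup's fifth output line with awk and cut, then ping it once (it resolves to 192.168.49.1, the gateway of the kic network). Combined into one probe, with the pod name copied from this run:

	pod=busybox-7b57f96db7-lq7nj
	host_ip=$(kubectl --context ha-956728 exec "$pod" -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context ha-956728 exec "$pod" -- sh -c "ping -c 1 $host_ip"
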
TestMultiControlPlane/serial/AddWorkerNode (60.38s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 node add --alsologtostderr -v 5
E1121 14:11:07.266808 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:11:07.273318 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:11:07.284703 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:11:07.306151 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:11:07.347495 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:11:07.428943 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:11:07.590594 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:11:07.912293 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:11:08.554320 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:11:09.836126 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:11:12.398910 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:11:17.520758 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:11:27.762747 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-956728 node add --alsologtostderr -v 5: (59.322400741s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-956728 status --alsologtostderr -v 5: (1.058807736s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.38s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-956728 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.067253218s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

TestMultiControlPlane/serial/CopyFile (20.58s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-956728 status --output json --alsologtostderr -v 5: (1.203500478s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp testdata/cp-test.txt ha-956728:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp ha-956728:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1727953062/001/cp-test_ha-956728.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp ha-956728:/home/docker/cp-test.txt ha-956728-m02:/home/docker/cp-test_ha-956728_ha-956728-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m02 "sudo cat /home/docker/cp-test_ha-956728_ha-956728-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp ha-956728:/home/docker/cp-test.txt ha-956728-m03:/home/docker/cp-test_ha-956728_ha-956728-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m03 "sudo cat /home/docker/cp-test_ha-956728_ha-956728-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp ha-956728:/home/docker/cp-test.txt ha-956728-m04:/home/docker/cp-test_ha-956728_ha-956728-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m04 "sudo cat /home/docker/cp-test_ha-956728_ha-956728-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp testdata/cp-test.txt ha-956728-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp ha-956728-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1727953062/001/cp-test_ha-956728-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m02 "sudo cat /home/docker/cp-test.txt"
E1121 14:11:48.245252 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp ha-956728-m02:/home/docker/cp-test.txt ha-956728:/home/docker/cp-test_ha-956728-m02_ha-956728.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728 "sudo cat /home/docker/cp-test_ha-956728-m02_ha-956728.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp ha-956728-m02:/home/docker/cp-test.txt ha-956728-m03:/home/docker/cp-test_ha-956728-m02_ha-956728-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m03 "sudo cat /home/docker/cp-test_ha-956728-m02_ha-956728-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp ha-956728-m02:/home/docker/cp-test.txt ha-956728-m04:/home/docker/cp-test_ha-956728-m02_ha-956728-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m04 "sudo cat /home/docker/cp-test_ha-956728-m02_ha-956728-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp testdata/cp-test.txt ha-956728-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp ha-956728-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1727953062/001/cp-test_ha-956728-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp ha-956728-m03:/home/docker/cp-test.txt ha-956728:/home/docker/cp-test_ha-956728-m03_ha-956728.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728 "sudo cat /home/docker/cp-test_ha-956728-m03_ha-956728.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp ha-956728-m03:/home/docker/cp-test.txt ha-956728-m02:/home/docker/cp-test_ha-956728-m03_ha-956728-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m02 "sudo cat /home/docker/cp-test_ha-956728-m03_ha-956728-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp ha-956728-m03:/home/docker/cp-test.txt ha-956728-m04:/home/docker/cp-test_ha-956728-m03_ha-956728-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m04 "sudo cat /home/docker/cp-test_ha-956728-m03_ha-956728-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp testdata/cp-test.txt ha-956728-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp ha-956728-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1727953062/001/cp-test_ha-956728-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp ha-956728-m04:/home/docker/cp-test.txt ha-956728:/home/docker/cp-test_ha-956728-m04_ha-956728.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728 "sudo cat /home/docker/cp-test_ha-956728-m04_ha-956728.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp ha-956728-m04:/home/docker/cp-test.txt ha-956728-m02:/home/docker/cp-test_ha-956728-m04_ha-956728-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m02 "sudo cat /home/docker/cp-test_ha-956728-m04_ha-956728-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 cp ha-956728-m04:/home/docker/cp-test.txt ha-956728-m03:/home/docker/cp-test_ha-956728-m04_ha-956728-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 ssh -n ha-956728-m03 "sudo cat /home/docker/cp-test_ha-956728-m04_ha-956728-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.58s)

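CopyFile above sweeps host-to-node and every node-to-node pairing across the four machines, verifying each transfer with sudo cat over ssh. The same sweep as a loop (the node-to-host legs are omitted; node list taken from this cluster):

	nodes="ha-956728 ha-956728-m02 ha-956728-m03 ha-956728-m04"
	for src in $nodes; do
	  out/minikube-linux-arm64 -p ha-956728 cp testdata/cp-test.txt "$src:/home/docker/cp-test.txt"
	  for dst in $nodes; do
	    [ "$src" = "$dst" ] && continue
	    out/minikube-linux-arm64 -p ha-956728 cp "$src:/home/docker/cp-test.txt" "$dst:/home/docker/cp-test_${src}_${dst}.txt"
	    out/minikube-linux-arm64 -p ha-956728 ssh -n "$dst" "sudo cat /home/docker/cp-test_${src}_${dst}.txt"
	  done
	done
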
TestMultiControlPlane/serial/StopSecondaryNode (13.01s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-956728 node stop m02 --alsologtostderr -v 5: (12.205538313s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-956728 status --alsologtostderr -v 5: exit status 7 (802.130143ms)

-- stdout --
	ha-956728
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-956728-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-956728-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-956728-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 14:12:13.889285 2689558 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:12:13.889494 2689558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:12:13.889526 2689558 out.go:374] Setting ErrFile to fd 2...
	I1121 14:12:13.889547 2689558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:12:13.889874 2689558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:12:13.890102 2689558 out.go:368] Setting JSON to false
	I1121 14:12:13.890164 2689558 mustload.go:66] Loading cluster: ha-956728
	I1121 14:12:13.890244 2689558 notify.go:221] Checking for updates...
	I1121 14:12:13.890624 2689558 config.go:182] Loaded profile config "ha-956728": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:12:13.890660 2689558 status.go:174] checking status of ha-956728 ...
	I1121 14:12:13.891265 2689558 cli_runner.go:164] Run: docker container inspect ha-956728 --format={{.State.Status}}
	I1121 14:12:13.913600 2689558 status.go:371] ha-956728 host status = "Running" (err=<nil>)
	I1121 14:12:13.913630 2689558 host.go:66] Checking if "ha-956728" exists ...
	I1121 14:12:13.913936 2689558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-956728
	I1121 14:12:13.938553 2689558 host.go:66] Checking if "ha-956728" exists ...
	I1121 14:12:13.938982 2689558 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:12:13.939072 2689558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-956728
	I1121 14:12:13.967244 2689558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36455 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/ha-956728/id_rsa Username:docker}
	I1121 14:12:14.080446 2689558 ssh_runner.go:195] Run: systemctl --version
	I1121 14:12:14.087506 2689558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:12:14.109742 2689558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:12:14.182247 2689558 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-21 14:12:14.172706964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:12:14.182802 2689558 kubeconfig.go:125] found "ha-956728" server: "https://192.168.49.254:8443"
	I1121 14:12:14.182849 2689558 api_server.go:166] Checking apiserver status ...
	I1121 14:12:14.182898 2689558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:12:14.197096 2689558 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	I1121 14:12:14.206388 2689558 api_server.go:182] apiserver freezer: "7:freezer:/docker/34e2647e179ed8af9a012b69591074947fa663b88924600f9c8f6f923ede29dd/kubepods/burstable/pod3877fa2c72dfa0587708d01403002aaf/1ae442b4a734ed361038a0cabf79ae5fabe116d600f8194edc8f831c6f6edab7"
	I1121 14:12:14.206461 2689558 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/34e2647e179ed8af9a012b69591074947fa663b88924600f9c8f6f923ede29dd/kubepods/burstable/pod3877fa2c72dfa0587708d01403002aaf/1ae442b4a734ed361038a0cabf79ae5fabe116d600f8194edc8f831c6f6edab7/freezer.state
	I1121 14:12:14.215367 2689558 api_server.go:204] freezer state: "THAWED"
	I1121 14:12:14.215398 2689558 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1121 14:12:14.223818 2689558 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1121 14:12:14.223853 2689558 status.go:463] ha-956728 apiserver status = Running (err=<nil>)
	I1121 14:12:14.223867 2689558 status.go:176] ha-956728 status: &{Name:ha-956728 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:12:14.223886 2689558 status.go:174] checking status of ha-956728-m02 ...
	I1121 14:12:14.224226 2689558 cli_runner.go:164] Run: docker container inspect ha-956728-m02 --format={{.State.Status}}
	I1121 14:12:14.244101 2689558 status.go:371] ha-956728-m02 host status = "Stopped" (err=<nil>)
	I1121 14:12:14.244126 2689558 status.go:384] host is not running, skipping remaining checks
	I1121 14:12:14.244140 2689558 status.go:176] ha-956728-m02 status: &{Name:ha-956728-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:12:14.244162 2689558 status.go:174] checking status of ha-956728-m03 ...
	I1121 14:12:14.244486 2689558 cli_runner.go:164] Run: docker container inspect ha-956728-m03 --format={{.State.Status}}
	I1121 14:12:14.264023 2689558 status.go:371] ha-956728-m03 host status = "Running" (err=<nil>)
	I1121 14:12:14.264062 2689558 host.go:66] Checking if "ha-956728-m03" exists ...
	I1121 14:12:14.264371 2689558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-956728-m03
	I1121 14:12:14.284488 2689558 host.go:66] Checking if "ha-956728-m03" exists ...
	I1121 14:12:14.284815 2689558 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:12:14.284859 2689558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-956728-m03
	I1121 14:12:14.303588 2689558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36465 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/ha-956728-m03/id_rsa Username:docker}
	I1121 14:12:14.402859 2689558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:12:14.418201 2689558 kubeconfig.go:125] found "ha-956728" server: "https://192.168.49.254:8443"
	I1121 14:12:14.418231 2689558 api_server.go:166] Checking apiserver status ...
	I1121 14:12:14.418283 2689558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:12:14.430538 2689558 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1326/cgroup
	I1121 14:12:14.439659 2689558 api_server.go:182] apiserver freezer: "7:freezer:/docker/8e25653c6a18013e3c32bd1a65b1f38a1cc423e67ef6088f2cb545629d4a58b9/kubepods/burstable/pod16007c8c8ce14bf7c9fa033e121a5315/575e68004d5cd7cb4b31bc4969f6e3d6b6fa7cbeca7df47922f22d84c0fe60dc"
	I1121 14:12:14.439730 2689558 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8e25653c6a18013e3c32bd1a65b1f38a1cc423e67ef6088f2cb545629d4a58b9/kubepods/burstable/pod16007c8c8ce14bf7c9fa033e121a5315/575e68004d5cd7cb4b31bc4969f6e3d6b6fa7cbeca7df47922f22d84c0fe60dc/freezer.state
	I1121 14:12:14.447981 2689558 api_server.go:204] freezer state: "THAWED"
	I1121 14:12:14.448058 2689558 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1121 14:12:14.456458 2689558 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1121 14:12:14.456543 2689558 status.go:463] ha-956728-m03 apiserver status = Running (err=<nil>)
	I1121 14:12:14.456559 2689558 status.go:176] ha-956728-m03 status: &{Name:ha-956728-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:12:14.456578 2689558 status.go:174] checking status of ha-956728-m04 ...
	I1121 14:12:14.456887 2689558 cli_runner.go:164] Run: docker container inspect ha-956728-m04 --format={{.State.Status}}
	I1121 14:12:14.474469 2689558 status.go:371] ha-956728-m04 host status = "Running" (err=<nil>)
	I1121 14:12:14.474510 2689558 host.go:66] Checking if "ha-956728-m04" exists ...
	I1121 14:12:14.474822 2689558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-956728-m04
	I1121 14:12:14.491766 2689558 host.go:66] Checking if "ha-956728-m04" exists ...
	I1121 14:12:14.492078 2689558 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:12:14.492115 2689558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-956728-m04
	I1121 14:12:14.520033 2689558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36470 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/ha-956728-m04/id_rsa Username:docker}
	I1121 14:12:14.622989 2689558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:12:14.637440 2689558 status.go:176] ha-956728-m04 status: &{Name:ha-956728-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.01s)
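The stderr trace above is the full status probe for an HA cluster: a docker container inspect for host state, an SSH session to check the kubelet unit, a cgroup freezer lookup to confirm the apiserver container is thawed, and finally an HTTPS request to /healthz on the load-balancer VIP. A minimal Go sketch of that last step, using the endpoint from the log; the InsecureSkipVerify shortcut is an assumption to keep the sketch self-contained, where the real probe trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for illustration: skip server-cert verification instead
		// of loading the cluster CA the way the real status probe does.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with body "ok", as logged above.
	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
}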

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

TestMultiControlPlane/serial/RestartSecondaryNode (15.52s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 node start m02 --alsologtostderr -v 5
E1121 14:12:29.207212 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-956728 node start m02 --alsologtostderr -v 5: (14.022252097s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-956728 status --alsologtostderr -v 5: (1.395832947s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (15.52s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.118638922s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.8s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-956728 stop --alsologtostderr -v 5: (37.554601915s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 start --wait true --alsologtostderr -v 5
E1121 14:13:51.128559 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-956728 start --wait true --alsologtostderr -v 5: (59.070518278s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.80s)
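What ha_test.go:458-474 exercises above is a stop/start round-trip that must leave the node list intact. A condensed sketch of the same check, assuming a minikube binary on PATH and the ha-956728 profile from this run; this is a sketch, not the test's actual runner:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to minikube and fails loudly on a non-zero exit.
func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("minikube %v: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	before := run("-p", "ha-956728", "node", "list")
	run("-p", "ha-956728", "stop")
	run("-p", "ha-956728", "start", "--wait", "true")
	after := run("-p", "ha-956728", "node", "list")
	fmt.Println("node list preserved across restart:", before == after)
}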

TestMultiControlPlane/serial/DeleteSecondaryNode (10.81s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-956728 node delete m03 --alsologtostderr -v 5: (9.836560047s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.81s)
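The go-template passed to kubectl at ha_test.go:521 ranges over every node's status.conditions and prints the status of the Ready condition, one per line. The same template can be exercised standalone; this sketch feeds it a hand-rolled stand-in for the NodeList JSON (maps rather than structs, so the lowercase field names resolve):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Verbatim template from the test invocation above.
	const src = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	tmpl := template.Must(template.New("ready").Parse(src))

	// Stand-in for kubectl's NodeList output: two nodes, both Ready.
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "MemoryPressure", "status": "False"},
				map[string]any{"type": "Ready", "status": "True"},
			}}},
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	// Prints " True" once per node, which is what the test asserts on.
	if err := tmpl.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}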

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

TestMultiControlPlane/serial/StopCluster (36.41s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 stop --alsologtostderr -v 5
E1121 14:14:30.355112 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-956728 stop --alsologtostderr -v 5: (36.286683009s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-956728 status --alsologtostderr -v 5: exit status 7 (126.095354ms)

-- stdout --
	ha-956728
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-956728-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-956728-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1121 14:14:56.886179 2704288 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:14:56.886310 2704288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:14:56.886322 2704288 out.go:374] Setting ErrFile to fd 2...
	I1121 14:14:56.886328 2704288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:14:56.887000 2704288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:14:56.887211 2704288 out.go:368] Setting JSON to false
	I1121 14:14:56.887254 2704288 mustload.go:66] Loading cluster: ha-956728
	I1121 14:14:56.887333 2704288 notify.go:221] Checking for updates...
	I1121 14:14:56.888530 2704288 config.go:182] Loaded profile config "ha-956728": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:14:56.888555 2704288 status.go:174] checking status of ha-956728 ...
	I1121 14:14:56.889303 2704288 cli_runner.go:164] Run: docker container inspect ha-956728 --format={{.State.Status}}
	I1121 14:14:56.906173 2704288 status.go:371] ha-956728 host status = "Stopped" (err=<nil>)
	I1121 14:14:56.906198 2704288 status.go:384] host is not running, skipping remaining checks
	I1121 14:14:56.906204 2704288 status.go:176] ha-956728 status: &{Name:ha-956728 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:14:56.906236 2704288 status.go:174] checking status of ha-956728-m02 ...
	I1121 14:14:56.906535 2704288 cli_runner.go:164] Run: docker container inspect ha-956728-m02 --format={{.State.Status}}
	I1121 14:14:56.935998 2704288 status.go:371] ha-956728-m02 host status = "Stopped" (err=<nil>)
	I1121 14:14:56.936023 2704288 status.go:384] host is not running, skipping remaining checks
	I1121 14:14:56.936030 2704288 status.go:176] ha-956728-m02 status: &{Name:ha-956728-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:14:56.936050 2704288 status.go:174] checking status of ha-956728-m04 ...
	I1121 14:14:56.936325 2704288 cli_runner.go:164] Run: docker container inspect ha-956728-m04 --format={{.State.Status}}
	I1121 14:14:56.953290 2704288 status.go:371] ha-956728-m04 host status = "Stopped" (err=<nil>)
	I1121 14:14:56.953317 2704288 status.go:384] host is not running, skipping remaining checks
	I1121 14:14:56.953325 2704288 status.go:176] ha-956728-m04 status: &{Name:ha-956728-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.41s)

TestMultiControlPlane/serial/RestartCluster (61.56s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-956728 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m0.602693433s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (61.56s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

TestMultiControlPlane/serial/AddSecondaryNode (63.93s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 node add --control-plane --alsologtostderr -v 5
E1121 14:16:07.266969 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:16:34.974320 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-956728 node add --control-plane --alsologtostderr -v 5: (1m2.610012295s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-956728 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-956728 status --alsologtostderr -v 5: (1.317033661s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (63.93s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.2s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.195646299s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.20s)

TestJSONOutput/start/Command (81.42s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-697092 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-697092 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m21.411433151s)
--- PASS: TestJSONOutput/start/Command (81.42s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-697092 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-697092 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.02s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-697092 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-697092 --output=json --user=testUser: (6.015694359s)
--- PASS: TestJSONOutput/stop/Command (6.02s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-736625 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-736625 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.635556ms)

-- stdout --
	{"specversion":"1.0","id":"276b15e7-c9dd-478e-8b00-924d5619e63f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-736625] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f0774b21-f81f-4c0b-b1cc-45c53aff4ad8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21847"}}
	{"specversion":"1.0","id":"0dea45f1-af15-40c4-ade7-0541cf3094cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4bf8b5df-401b-410e-9e0f-58ebb62c8c6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig"}}
	{"specversion":"1.0","id":"ab530926-4391-4b98-800b-82a018bd92b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube"}}
	{"specversion":"1.0","id":"82136754-ca15-4e1a-a0c8-acd5a829d704","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"53fc8dfa-5335-4fec-a375-7c9b677a05f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"676660bc-17a0-4839-93bf-db4e0f12c628","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-736625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-736625
--- PASS: TestErrorJSONOutput (0.24s)
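Each line emitted under --output=json is a CloudEvents envelope like those in the stdout dump above, with the event-specific payload under data. A minimal decoder sketch for such a stream; the struct fields mirror the events shown, and filtering on the error type is illustration only:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// Envelope fields as they appear in the events above; data is a string map
// because every payload key shown (message, exitcode, name, ...) is a string.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// e.g.  minikube start --output=json ... | this-program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			// Mirrors the final event above: exit code 56, DRV_UNSUPPORTED_OS.
			fmt.Printf("%s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}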

TestKicCustomNetwork/create_custom_network (76.38s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-308281 --network=
E1121 14:19:30.355023 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-308281 --network=: (1m14.037477163s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-308281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-308281
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-308281: (2.307977318s)
--- PASS: TestKicCustomNetwork/create_custom_network (76.38s)

TestKicCustomNetwork/use_default_bridge_network (41.71s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-933189 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-933189 --network=bridge: (39.547972143s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-933189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-933189
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-933189: (2.128214296s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (41.71s)

TestKicExistingNetwork (38.51s)

=== RUN   TestKicExistingNetwork
I1121 14:20:44.712321 2635785 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1121 14:20:44.728605 2635785 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1121 14:20:44.728683 2635785 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1121 14:20:44.728701 2635785 cli_runner.go:164] Run: docker network inspect existing-network
W1121 14:20:44.745364 2635785 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1121 14:20:44.745395 2635785 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1121 14:20:44.745410 2635785 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1121 14:20:44.745516 2635785 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1121 14:20:44.761989 2635785 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c13a3bee40ff IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:9f:8e:c6:2a:d6} reservation:<nil>}
I1121 14:20:44.762293 2635785 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d8d340}
I1121 14:20:44.762318 2635785 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1121 14:20:44.762374 2635785 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1121 14:20:44.821141 2635785 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-166413 --network=existing-network
E1121 14:20:53.420707 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:21:07.268673 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-166413 --network=existing-network: (36.223571063s)
helpers_test.go:175: Cleaning up "existing-network-166413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-166413
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-166413: (2.144028812s)
I1121 14:21:23.205571 2635785 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (38.51s)
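The subnet walk traced above (network.go) starts at 192.168.49.0/24, skips it because the default minikube bridge already holds it, and settles on 192.168.58.0/24. A sketch of that selection, assuming the third-octet step of 9 suggested by the 49 -> 58 jump here and by the 192.168.67.x cluster later in this report:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Subnets already claimed by other networks; in the trace above the
	// default minikube bridge holds 192.168.49.0/24.
	taken := map[string]bool{"192.168.49.0/24": true}

	// Assumed step of 9 on the third octet (49, 58, 67, ...).
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[cidr] {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			continue
		}
		fmt.Println("using free private subnet", subnet)
		break
	}
}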

TestKicCustomSubnet (36.44s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-340831 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-340831 --subnet=192.168.60.0/24: (34.215728953s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-340831 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-340831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-340831
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-340831: (2.195740795s)
--- PASS: TestKicCustomSubnet (36.44s)

TestKicStaticIP (37.37s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-852714 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-852714 --static-ip=192.168.200.200: (34.939106781s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-852714 ip
helpers_test.go:175: Cleaning up "static-ip-852714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-852714
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-852714: (2.262572427s)
--- PASS: TestKicStaticIP (37.37s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (75.33s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-952729 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-952729 --driver=docker  --container-runtime=containerd: (34.437342642s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-955141 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-955141 --driver=docker  --container-runtime=containerd: (35.178941875s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-952729
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-955141
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-955141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-955141
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-955141: (2.178815143s)
helpers_test.go:175: Cleaning up "first-952729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-952729
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-952729: (2.023397712s)
--- PASS: TestMinikubeProfile (75.33s)

TestMountStart/serial/StartWithMountFirst (8.22s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-291433 --memory=3072 --mount-string /tmp/TestMountStartserial3698124083/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-291433 --memory=3072 --mount-string /tmp/TestMountStartserial3698124083/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.220969863s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.22s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-291433 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (9.76s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-293591 --memory=3072 --mount-string /tmp/TestMountStartserial3698124083/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-293591 --memory=3072 --mount-string /tmp/TestMountStartserial3698124083/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.759625115s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.76s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-293591 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-291433 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-291433 --alsologtostderr -v=5: (1.715054417s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-293591 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-293591
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-293591: (1.286092041s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (7.91s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-293591
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-293591: (6.910414575s)
--- PASS: TestMountStart/serial/RestartStopped (7.91s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-293591 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (76.94s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-655254 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1121 14:24:30.354588 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-655254 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m16.380370673s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (76.94s)

TestMultiNode/serial/DeployApp2Nodes (5.12s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-655254 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-655254 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-655254 -- rollout status deployment/busybox: (3.118350738s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-655254 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-655254 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-655254 -- exec busybox-7b57f96db7-fnzr8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-655254 -- exec busybox-7b57f96db7-gzqdf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-655254 -- exec busybox-7b57f96db7-fnzr8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-655254 -- exec busybox-7b57f96db7-gzqdf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-655254 -- exec busybox-7b57f96db7-fnzr8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-655254 -- exec busybox-7b57f96db7-gzqdf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.12s)

TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-655254 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-655254 -- exec busybox-7b57f96db7-fnzr8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-655254 -- exec busybox-7b57f96db7-fnzr8 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-655254 -- exec busybox-7b57f96db7-gzqdf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-655254 -- exec busybox-7b57f96db7-gzqdf -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
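The nslookup pipeline in this test (awk 'NR==5' | cut -d' ' -f3) grabs the third field of the fifth line of busybox nslookup output, which is where the resolved IP of host.minikube.internal lands, and then pings it. A sketch of the same parse; the sample output shape is an assumption, since busybox formats nslookup output differently across versions:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Assumed busybox-style nslookup output; the real shape varies by version.
	out := "Server:    10.96.0.10\n" +
		"Address:   10.96.0.10:53\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.67.1\n"

	lines := strings.Split(out, "\n")
	line5 := lines[4]                  // awk 'NR==5' (awk lines are 1-based)
	fields := strings.Fields(line5)    // cut -d' ' -f3, modulo repeated spaces
	fmt.Println("host IP:", fields[2]) // 192.168.67.1 — the address the test pings
}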

TestMultiNode/serial/AddNode (58.3s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-655254 -v=5 --alsologtostderr
E1121 14:26:07.266825 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-655254 -v=5 --alsologtostderr: (57.458991098s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.30s)

TestMultiNode/serial/MultiNodeLabels (0.08s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-655254 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

TestMultiNode/serial/ProfileList (0.72s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.35s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 cp testdata/cp-test.txt multinode-655254:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 cp multinode-655254:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1765346576/001/cp-test_multinode-655254.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 cp multinode-655254:/home/docker/cp-test.txt multinode-655254-m02:/home/docker/cp-test_multinode-655254_multinode-655254-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254-m02 "sudo cat /home/docker/cp-test_multinode-655254_multinode-655254-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 cp multinode-655254:/home/docker/cp-test.txt multinode-655254-m03:/home/docker/cp-test_multinode-655254_multinode-655254-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254-m03 "sudo cat /home/docker/cp-test_multinode-655254_multinode-655254-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 cp testdata/cp-test.txt multinode-655254-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 cp multinode-655254-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1765346576/001/cp-test_multinode-655254-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 cp multinode-655254-m02:/home/docker/cp-test.txt multinode-655254:/home/docker/cp-test_multinode-655254-m02_multinode-655254.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254 "sudo cat /home/docker/cp-test_multinode-655254-m02_multinode-655254.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 cp multinode-655254-m02:/home/docker/cp-test.txt multinode-655254-m03:/home/docker/cp-test_multinode-655254-m02_multinode-655254-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254-m03 "sudo cat /home/docker/cp-test_multinode-655254-m02_multinode-655254-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 cp testdata/cp-test.txt multinode-655254-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 cp multinode-655254-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1765346576/001/cp-test_multinode-655254-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 cp multinode-655254-m03:/home/docker/cp-test.txt multinode-655254:/home/docker/cp-test_multinode-655254-m03_multinode-655254.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254 "sudo cat /home/docker/cp-test_multinode-655254-m03_multinode-655254.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 cp multinode-655254-m03:/home/docker/cp-test.txt multinode-655254-m02:/home/docker/cp-test_multinode-655254-m03_multinode-655254-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 ssh -n multinode-655254-m02 "sudo cat /home/docker/cp-test_multinode-655254-m03_multinode-655254-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.35s)
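
Note: CopyFile round-trips a file host->node, node->host, and node->node, verifying each hop with ssh+cat. The pattern it exercises, sketched with hypothetical names (profile "demo", worker "demo-m02"):

  minikube -p demo cp ./cp-test.txt demo-m02:/home/docker/cp-test.txt   # host -> node
  minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"  # verify contents
  minikube -p demo cp demo-m02:/home/docker/cp-test.txt /tmp/back.txt   # node -> host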

TestMultiNode/serial/StopNode (2.39s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-655254 node stop m03: (1.32910824s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-655254 status: exit status 7 (520.735823ms)

-- stdout --
	multinode-655254
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-655254-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-655254-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-655254 status --alsologtostderr: exit status 7 (542.086676ms)

-- stdout --
	multinode-655254
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-655254-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-655254-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1121 14:26:58.658687 2757422 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:26:58.658967 2757422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:26:58.659215 2757422 out.go:374] Setting ErrFile to fd 2...
	I1121 14:26:58.659641 2757422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:26:58.659969 2757422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:26:58.660214 2757422 out.go:368] Setting JSON to false
	I1121 14:26:58.660262 2757422 mustload.go:66] Loading cluster: multinode-655254
	I1121 14:26:58.660357 2757422 notify.go:221] Checking for updates...
	I1121 14:26:58.660747 2757422 config.go:182] Loaded profile config "multinode-655254": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:26:58.660768 2757422 status.go:174] checking status of multinode-655254 ...
	I1121 14:26:58.661405 2757422 cli_runner.go:164] Run: docker container inspect multinode-655254 --format={{.State.Status}}
	I1121 14:26:58.685144 2757422 status.go:371] multinode-655254 host status = "Running" (err=<nil>)
	I1121 14:26:58.685177 2757422 host.go:66] Checking if "multinode-655254" exists ...
	I1121 14:26:58.685549 2757422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-655254
	I1121 14:26:58.717377 2757422 host.go:66] Checking if "multinode-655254" exists ...
	I1121 14:26:58.717687 2757422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:26:58.717732 2757422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-655254
	I1121 14:26:58.735212 2757422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36575 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/multinode-655254/id_rsa Username:docker}
	I1121 14:26:58.834646 2757422 ssh_runner.go:195] Run: systemctl --version
	I1121 14:26:58.841107 2757422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:26:58.853963 2757422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:26:58.912488 2757422 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-21 14:26:58.902918383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:26:58.913243 2757422 kubeconfig.go:125] found "multinode-655254" server: "https://192.168.67.2:8443"
	I1121 14:26:58.913278 2757422 api_server.go:166] Checking apiserver status ...
	I1121 14:26:58.913323 2757422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:26:58.926024 2757422 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1364/cgroup
	I1121 14:26:58.934697 2757422 api_server.go:182] apiserver freezer: "7:freezer:/docker/c00ca3507e580ecc9dcd83973dd3b2297bc29faa2161f4eb741b069715b921e0/kubepods/burstable/podd5e4fa6275718e5d399f3d39524cef53/914bb13061a6651987b36ec84d9613a7711f89df2b6334f04c599cc293010a1d"
	I1121 14:26:58.934777 2757422 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c00ca3507e580ecc9dcd83973dd3b2297bc29faa2161f4eb741b069715b921e0/kubepods/burstable/podd5e4fa6275718e5d399f3d39524cef53/914bb13061a6651987b36ec84d9613a7711f89df2b6334f04c599cc293010a1d/freezer.state
	I1121 14:26:58.942776 2757422 api_server.go:204] freezer state: "THAWED"
	I1121 14:26:58.942805 2757422 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1121 14:26:58.951074 2757422 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1121 14:26:58.951112 2757422 status.go:463] multinode-655254 apiserver status = Running (err=<nil>)
	I1121 14:26:58.951140 2757422 status.go:176] multinode-655254 status: &{Name:multinode-655254 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:26:58.951164 2757422 status.go:174] checking status of multinode-655254-m02 ...
	I1121 14:26:58.951462 2757422 cli_runner.go:164] Run: docker container inspect multinode-655254-m02 --format={{.State.Status}}
	I1121 14:26:58.970937 2757422 status.go:371] multinode-655254-m02 host status = "Running" (err=<nil>)
	I1121 14:26:58.970965 2757422 host.go:66] Checking if "multinode-655254-m02" exists ...
	I1121 14:26:58.971279 2757422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-655254-m02
	I1121 14:26:58.988677 2757422 host.go:66] Checking if "multinode-655254-m02" exists ...
	I1121 14:26:58.988990 2757422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:26:58.989104 2757422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-655254-m02
	I1121 14:26:59.010162 2757422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36580 SSHKeyPath:/home/jenkins/minikube-integration/21847-2633933/.minikube/machines/multinode-655254-m02/id_rsa Username:docker}
	I1121 14:26:59.110233 2757422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:26:59.123709 2757422 status.go:176] multinode-655254-m02 status: &{Name:multinode-655254-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:26:59.123744 2757422 status.go:174] checking status of multinode-655254-m03 ...
	I1121 14:26:59.124059 2757422 cli_runner.go:164] Run: docker container inspect multinode-655254-m03 --format={{.State.Status}}
	I1121 14:26:59.141810 2757422 status.go:371] multinode-655254-m03 host status = "Stopped" (err=<nil>)
	I1121 14:26:59.141836 2757422 status.go:384] host is not running, skipping remaining checks
	I1121 14:26:59.141843 2757422 status.go:176] multinode-655254-m03 status: &{Name:multinode-655254-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
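
Note: as the run above shows, "minikube status" deliberately exits 7 when any host is Stopped, while still printing per-node detail, so automation must branch on the exit code rather than treat nonzero as a hard failure. A sketch, hypothetical profile "demo":

  minikube -p demo node stop m03
  minikube -p demo status
  case $? in
    0) echo "all nodes running" ;;
    7) echo "cluster reachable, but at least one component is stopped" ;;
    *) echo "status failed" ;;
  esac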

TestMultiNode/serial/StartAfterStop (8.13s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-655254 node start m03 -v=5 --alsologtostderr: (7.337364042s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.13s)

TestMultiNode/serial/RestartKeepsNodes (81.74s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-655254
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-655254
E1121 14:27:30.338048 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-655254: (25.180941837s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-655254 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-655254 --wait=true -v=5 --alsologtostderr: (56.415547103s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-655254
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.74s)
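
Note: RestartKeepsNodes asserts that a full stop/start cycle preserves the profile's node list. The contract it pins down, sketched with a hypothetical profile "demo":

  minikube node list -p demo           # record the node set
  minikube stop -p demo
  minikube start -p demo --wait=true   # --wait blocks until components are healthy
  minikube node list -p demo           # must match the pre-stop node set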

TestMultiNode/serial/DeleteNode (5.76s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-655254 node delete m03: (5.067731617s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.76s)

TestMultiNode/serial/StopMultiNode (24.08s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-655254 stop: (23.890002567s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-655254 status: exit status 7 (86.846094ms)

-- stdout --
	multinode-655254
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-655254-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-655254 status --alsologtostderr: exit status 7 (99.986133ms)

-- stdout --
	multinode-655254
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-655254-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1121 14:28:58.798726 2766164 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:28:58.798887 2766164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:28:58.798919 2766164 out.go:374] Setting ErrFile to fd 2...
	I1121 14:28:58.798937 2766164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:28:58.799206 2766164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:28:58.799423 2766164 out.go:368] Setting JSON to false
	I1121 14:28:58.799483 2766164 mustload.go:66] Loading cluster: multinode-655254
	I1121 14:28:58.799933 2766164 config.go:182] Loaded profile config "multinode-655254": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:28:58.799992 2766164 status.go:174] checking status of multinode-655254 ...
	I1121 14:28:58.799522 2766164 notify.go:221] Checking for updates...
	I1121 14:28:58.801208 2766164 cli_runner.go:164] Run: docker container inspect multinode-655254 --format={{.State.Status}}
	I1121 14:28:58.819861 2766164 status.go:371] multinode-655254 host status = "Stopped" (err=<nil>)
	I1121 14:28:58.819883 2766164 status.go:384] host is not running, skipping remaining checks
	I1121 14:28:58.819890 2766164 status.go:176] multinode-655254 status: &{Name:multinode-655254 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:28:58.819920 2766164 status.go:174] checking status of multinode-655254-m02 ...
	I1121 14:28:58.820231 2766164 cli_runner.go:164] Run: docker container inspect multinode-655254-m02 --format={{.State.Status}}
	I1121 14:28:58.849407 2766164 status.go:371] multinode-655254-m02 host status = "Stopped" (err=<nil>)
	I1121 14:28:58.849449 2766164 status.go:384] host is not running, skipping remaining checks
	I1121 14:28:58.849457 2766164 status.go:176] multinode-655254-m02 status: &{Name:multinode-655254-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.08s)

TestMultiNode/serial/RestartMultiNode (51.96s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-655254 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1121 14:29:30.355133 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-655254 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.268866615s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-655254 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.96s)

TestMultiNode/serial/ValidateNameConflict (38.83s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-655254
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-655254-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-655254-m02 --driver=docker  --container-runtime=containerd: exit status 14 (93.378523ms)

-- stdout --
	* [multinode-655254-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-655254-m02' is duplicated with machine name 'multinode-655254-m02' in profile 'multinode-655254'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-655254-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-655254-m03 --driver=docker  --container-runtime=containerd: (36.206045978s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-655254
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-655254: exit status 80 (359.731788ms)

-- stdout --
	* Adding node m03 to cluster multinode-655254 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-655254-m03 already exists in multinode-655254-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-655254-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-655254-m03: (2.114192484s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.83s)
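
Note: ValidateNameConflict pins down a naming hazard: workers of profile X are machines named X-m02, X-m03, ..., so a standalone profile using one of those names collides in both directions, as the exit 14 and exit 80 failures above show. Sketched with a hypothetical base profile "demo":

  minikube start -p demo        # its workers will be named demo-m02, demo-m03, ...
  minikube start -p demo-m02    # refused: duplicates an existing machine name (exit 14)
  minikube start -p demo-m03    # allowed while demo has no third node...
  minikube node add -p demo     # ...but then the new node would be demo-m03 (exit 80)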

TestPreload (132.89s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-316554 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E1121 14:31:07.266869 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-316554 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m1.37191342s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-316554 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-316554 image pull gcr.io/k8s-minikube/busybox: (2.342075458s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-316554
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-316554: (5.89794344s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-316554 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-316554 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m0.548327908s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-316554 image list
helpers_test.go:175: Cleaning up "test-preload-316554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-316554
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-316554: (2.496619899s)
--- PASS: TestPreload (132.89s)
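
Note: TestPreload creates the cluster with --preload=false, pulls an extra image, then restarts without that flag and checks the image is still present, i.e. that applying a preload tarball does not clobber the existing image store. The flow, sketched with a hypothetical profile "demo":

  minikube start -p demo --preload=false --kubernetes-version=v1.32.0
  minikube -p demo image pull gcr.io/k8s-minikube/busybox
  minikube stop -p demo
  minikube start -p demo        # restart; preloading may now be applied
  minikube -p demo image list   # busybox must still be listed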

TestScheduledStopUnix (109.21s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-994288 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-994288 --memory=3072 --driver=docker  --container-runtime=containerd: (33.403211667s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-994288 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1121 14:33:20.265604 2782079 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:33:20.265774 2782079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:33:20.265787 2782079 out.go:374] Setting ErrFile to fd 2...
	I1121 14:33:20.265792 2782079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:33:20.266055 2782079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:33:20.266328 2782079 out.go:368] Setting JSON to false
	I1121 14:33:20.266439 2782079 mustload.go:66] Loading cluster: scheduled-stop-994288
	I1121 14:33:20.266886 2782079 config.go:182] Loaded profile config "scheduled-stop-994288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:33:20.266969 2782079 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/config.json ...
	I1121 14:33:20.267159 2782079 mustload.go:66] Loading cluster: scheduled-stop-994288
	I1121 14:33:20.267282 2782079 config.go:182] Loaded profile config "scheduled-stop-994288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-994288 -n scheduled-stop-994288
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-994288 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1121 14:33:20.711598 2782167 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:33:20.711808 2782167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:33:20.711836 2782167 out.go:374] Setting ErrFile to fd 2...
	I1121 14:33:20.711855 2782167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:33:20.712148 2782167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:33:20.712462 2782167 out.go:368] Setting JSON to false
	I1121 14:33:20.712768 2782167 daemonize_unix.go:73] killing process 2782095 as it is an old scheduled stop
	I1121 14:33:20.717051 2782167 mustload.go:66] Loading cluster: scheduled-stop-994288
	I1121 14:33:20.717517 2782167 config.go:182] Loaded profile config "scheduled-stop-994288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:33:20.717645 2782167 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/config.json ...
	I1121 14:33:20.717863 2782167 mustload.go:66] Loading cluster: scheduled-stop-994288
	I1121 14:33:20.718027 2782167 config.go:182] Loaded profile config "scheduled-stop-994288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1121 14:33:20.722206 2635785 retry.go:31] will retry after 57.5µs: open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/pid: no such file or directory
I1121 14:33:20.723365 2635785 retry.go:31] will retry after 149.33µs: open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/pid: no such file or directory
I1121 14:33:20.724450 2635785 retry.go:31] will retry after 327.782µs: open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/pid: no such file or directory
I1121 14:33:20.725585 2635785 retry.go:31] will retry after 367.095µs: open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/pid: no such file or directory
I1121 14:33:20.726721 2635785 retry.go:31] will retry after 440.801µs: open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/pid: no such file or directory
I1121 14:33:20.727858 2635785 retry.go:31] will retry after 932.546µs: open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/pid: no such file or directory
I1121 14:33:20.728973 2635785 retry.go:31] will retry after 772.782µs: open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/pid: no such file or directory
I1121 14:33:20.730091 2635785 retry.go:31] will retry after 1.861654ms: open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/pid: no such file or directory
I1121 14:33:20.732273 2635785 retry.go:31] will retry after 1.745231ms: open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/pid: no such file or directory
I1121 14:33:20.734460 2635785 retry.go:31] will retry after 4.811076ms: open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/pid: no such file or directory
I1121 14:33:20.739665 2635785 retry.go:31] will retry after 3.697849ms: open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/pid: no such file or directory
I1121 14:33:20.743880 2635785 retry.go:31] will retry after 11.351015ms: open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/pid: no such file or directory
I1121 14:33:20.756182 2635785 retry.go:31] will retry after 11.638312ms: open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/pid: no such file or directory
I1121 14:33:20.768617 2635785 retry.go:31] will retry after 14.456425ms: open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/pid: no such file or directory
I1121 14:33:20.783877 2635785 retry.go:31] will retry after 38.506125ms: open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/pid: no such file or directory
I1121 14:33:20.823282 2635785 retry.go:31] will retry after 32.458017ms: open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-994288 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-994288 -n scheduled-stop-994288
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-994288
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-994288 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1121 14:33:46.675628 2782651 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:33:46.675857 2782651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:33:46.675888 2782651 out.go:374] Setting ErrFile to fd 2...
	I1121 14:33:46.675911 2782651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:33:46.676172 2782651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:33:46.676470 2782651 out.go:368] Setting JSON to false
	I1121 14:33:46.676602 2782651 mustload.go:66] Loading cluster: scheduled-stop-994288
	I1121 14:33:46.676970 2782651 config.go:182] Loaded profile config "scheduled-stop-994288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:33:46.677107 2782651 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/scheduled-stop-994288/config.json ...
	I1121 14:33:46.677323 2782651 mustload.go:66] Loading cluster: scheduled-stop-994288
	I1121 14:33:46.677471 2782651 config.go:182] Loaded profile config "scheduled-stop-994288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1121 14:34:30.355007 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-994288
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-994288: exit status 7 (67.618001ms)

-- stdout --
	scheduled-stop-994288
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-994288 -n scheduled-stop-994288
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-994288 -n scheduled-stop-994288: exit status 7 (69.210865ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-994288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-994288
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-994288: (4.199943241s)
--- PASS: TestScheduledStopUnix (109.21s)
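
Note: the scheduled-stop behavior exercised above: --schedule forks a background timer process, a newer --schedule replaces it (the log shows the old PID being killed), and --cancel-scheduled clears any pending stop. Sketched with a hypothetical profile "demo":

  minikube stop -p demo --schedule 5m       # arm a stop 5 minutes out
  minikube stop -p demo --schedule 15s      # re-arm; the earlier scheduled stop is killed
  minikube stop -p demo --cancel-scheduled  # "All existing scheduled stops cancelled"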

TestInsufficientStorage (13.4s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-866991 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-866991 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.861451849s)

-- stdout --
	{"specversion":"1.0","id":"d744e3ad-b7fb-434b-9993-6899de376673","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-866991] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"00a65019-cd74-4e20-a8fb-de3080d46b7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21847"}}
	{"specversion":"1.0","id":"d24766ac-db7a-443b-9136-1494bb49ceb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f1828322-214b-4f48-8dbf-c29a1af08ffd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig"}}
	{"specversion":"1.0","id":"6c9c30c6-590d-4734-ba3b-1ef5aa17801a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube"}}
	{"specversion":"1.0","id":"567bb497-a849-45dd-b641-b51aac50fe9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"704a9ff9-7aa3-469d-a9c2-2665feed8080","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ca23a06b-ba0c-4cbc-9d87-4e0fc3588695","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"97a2e1d7-afc7-405c-898d-78492b4532d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4dbfac13-77b2-45b1-b1ed-30d3aecf5379","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"50d38161-8e76-4daf-a303-6883eef3246d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"3742c635-eaa4-4580-bc3e-0d91d94154f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-866991\" primary control-plane node in \"insufficient-storage-866991\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"71612c9b-f4a5-4f5f-a25a-b2b862878023","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763507788-21924 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"51eddc6b-a489-48ed-a9a0-612ae590a618","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"08e51925-2a5d-4b1f-9fd4-22121fc94598","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-866991 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-866991 --output=json --layout=cluster: exit status 7 (291.909606ms)

-- stdout --
	{"Name":"insufficient-storage-866991","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-866991","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1121 14:34:47.158221 2784266 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-866991" does not appear in /home/jenkins/minikube-integration/21847-2633933/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-866991 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-866991 --output=json --layout=cluster: exit status 7 (289.38767ms)

-- stdout --
	{"Name":"insufficient-storage-866991","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-866991","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1121 14:34:47.446621 2784331 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-866991" does not appear in /home/jenkins/minikube-integration/21847-2633933/kubeconfig
	E1121 14:34:47.456363 2784331 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/insufficient-storage-866991/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-866991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-866991
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-866991: (1.958451954s)
--- PASS: TestInsufficientStorage (13.40s)
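
Note: with --output=json, start emits one CloudEvents-style JSON object per line, and the disk-space check surfaces as an io.k8s.sigs.minikube.error event with exitcode 26 (RSRC_DOCKER_STORAGE); the MINIKUBE_TEST_* variables above are test-only knobs that fake a full /var. An illustrative (not part of the test) way to pull the error message out of that stream with jq:

  minikube start -p demo --output=json \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'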

TestRunningBinaryUpgrade (67.25s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.723353281 start -p running-upgrade-214735 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.723353281 start -p running-upgrade-214735 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (32.153067959s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-214735 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1121 14:39:30.354965 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-214735 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (24.795345645s)
helpers_test.go:175: Cleaning up "running-upgrade-214735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-214735
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-214735: (2.101059604s)
--- PASS: TestRunningBinaryUpgrade (67.25s)

TestKubernetesUpgrade (351.27s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-012139 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-012139 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.973591954s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-012139
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-012139: (1.362758827s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-012139 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-012139 status --format={{.Host}}: exit status 7 (78.210242ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-012139 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-012139 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m52.468627461s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-012139 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-012139 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-012139 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (130.705886ms)

-- stdout --
	* [kubernetes-upgrade-012139] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-012139
	    minikube start -p kubernetes-upgrade-012139 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0121392 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-012139 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-012139 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-012139 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (19.47290981s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-012139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-012139
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-012139: (2.636077066s)
--- PASS: TestKubernetesUpgrade (351.27s)
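
Note: TestKubernetesUpgrade pins down the supported direction: upgrade in place via stop/start with a newer --kubernetes-version, while an attempted downgrade of an existing cluster fails fast with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) and the recreate/second-cluster advice shown above. Sketched with a hypothetical profile "demo":

  minikube start -p demo --kubernetes-version=v1.28.0
  minikube stop -p demo
  minikube start -p demo --kubernetes-version=v1.34.1   # in-place upgrade: allowed
  minikube start -p demo --kubernetes-version=v1.28.0   # downgrade: refused, exit 106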

TestMissingContainerUpgrade (150.85s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1898309379 start -p missing-upgrade-904272 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1898309379 start -p missing-upgrade-904272 --memory=3072 --driver=docker  --container-runtime=containerd: (1m0.16164029s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-904272
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-904272
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-904272 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-904272 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m26.95904937s)
helpers_test.go:175: Cleaning up "missing-upgrade-904272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-904272
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-904272: (2.140792655s)
--- PASS: TestMissingContainerUpgrade (150.85s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-131949 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-131949 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (93.234863ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-131949] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
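Note: exit status 14 (MK_USAGE) above is the outcome this test asserts: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the accepted forms, reusing this run's profile name:

	$ out/minikube-linux-arm64 start -p NoKubernetes-131949 --no-kubernetes --driver=docker --container-runtime=containerd
	$ minikube config unset kubernetes-version    # per the suggestion above, clears a globally configured version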

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (47.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-131949 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-131949 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (46.604942995s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-131949 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (24.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-131949 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-131949 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (21.672737419s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-131949 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-131949 status -o json: exit status 2 (301.505708ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-131949","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-131949
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-131949: (2.279666124s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (24.25s)
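Note: after the --no-kubernetes restart, the host container keeps running while the kubelet and apiserver stay stopped, so `status -o json` exits 2 but still prints the JSON above. A sketch that tolerates the non-zero exit in a POSIX shell:

	$ out/minikube-linux-arm64 -p NoKubernetes-131949 status -o json || true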

                                                
                                    
x
+
TestNoKubernetes/serial/Start (8.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-131949 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1121 14:36:07.266338 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-131949 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.36829129s)
--- PASS: TestNoKubernetes/serial/Start (8.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21847-2633933/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-131949 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-131949 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.454632ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
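Note: "Process exited with status 3" is systemctl's exit code for an inactive unit, and the ssh wrapper surfaces the remote failure as exit status 1, which is exactly what the test asserts. A direct check while the profile is still up (dropping --quiet so the state is printed):

	$ minikube ssh -p NoKubernetes-131949 -- sudo systemctl is-active kubelet; echo "exit=$?"    # expect "inactive", exit=3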

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.70s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-131949
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-131949: (1.288305744s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-131949 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-131949 --driver=docker  --container-runtime=containerd: (7.834690116s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.83s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-131949 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-131949 "sudo systemctl is-active --quiet service kubelet": exit status 1 (478.720105ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.48s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (8.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (8.48s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (58.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2047534042 start -p stopped-upgrade-992304 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1121 14:37:33.422122 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2047534042 start -p stopped-upgrade-992304 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (32.670759279s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2047534042 -p stopped-upgrade-992304 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2047534042 -p stopped-upgrade-992304 stop: (1.232588621s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-992304 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-992304 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (24.923508009s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (58.83s)
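Note: the flow here is the stopped-binary upgrade pattern: provision with an old released binary, stop the cluster, then start the same profile with the binary under test. Condensed from this run (the /tmp path is the downloaded v1.32.0 binary):

	$ /tmp/minikube-v1.32.0.2047534042 start -p stopped-upgrade-992304 --memory=3072 --vm-driver=docker --container-runtime=containerd
	$ /tmp/minikube-v1.32.0.2047534042 -p stopped-upgrade-992304 stop
	$ out/minikube-linux-arm64 start -p stopped-upgrade-992304 --memory=3072 --driver=docker --container-runtime=containerd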

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.67s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-992304
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-992304: (1.670848711s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.67s)

                                                
                                    
x
+
TestPause/serial/Start (84.89s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-539588 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-539588 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m24.891724143s)
--- PASS: TestPause/serial/Start (84.89s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (7.19s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-539588 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1121 14:41:07.266952 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-539588 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.180997686s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.19s)

                                                
                                    
x
+
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-539588 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-539588 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-539588 --output=json --layout=cluster: exit status 2 (441.03198ms)

                                                
                                                
-- stdout --
	{"Name":"pause-539588","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-539588","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)
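Note: a paused cluster reports StatusCode 418 ("Paused") for the apiserver and 405 ("Stopped") for the kubelet, and the status command itself exits 2, as shown above. A one-liner for extracting just the per-component states, assuming jq is available on the host:

	$ out/minikube-linux-arm64 status -p pause-539588 --output=json --layout=cluster | jq '.Nodes[].Components'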

                                                
                                    
x
+
TestPause/serial/Unpause (0.81s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-539588 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.81s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.78s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-539588 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.53s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-539588 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-539588 --alsologtostderr -v=5: (2.526823313s)
--- PASS: TestPause/serial/DeletePaused (2.53s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-539588
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-539588: exit status 1 (19.643905ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-539588: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-650772 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-650772 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (260.620274ms)

                                                
                                                
-- stdout --
	* [false-650772] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 14:42:07.303979 2825507 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:42:07.304527 2825507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:42:07.304564 2825507 out.go:374] Setting ErrFile to fd 2...
	I1121 14:42:07.304584 2825507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:42:07.304885 2825507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-2633933/.minikube/bin
	I1121 14:42:07.305465 2825507 out.go:368] Setting JSON to false
	I1121 14:42:07.306899 2825507 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":69876,"bootTime":1763666252,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 14:42:07.307117 2825507 start.go:143] virtualization:  
	I1121 14:42:07.316435 2825507 out.go:179] * [false-650772] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 14:42:07.319933 2825507 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:42:07.320022 2825507 notify.go:221] Checking for updates...
	I1121 14:42:07.324579 2825507 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:42:07.327915 2825507 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-2633933/kubeconfig
	I1121 14:42:07.331624 2825507 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-2633933/.minikube
	I1121 14:42:07.335311 2825507 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 14:42:07.338345 2825507 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:42:07.341862 2825507 config.go:182] Loaded profile config "kubernetes-upgrade-012139": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 14:42:07.342040 2825507 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:42:07.386275 2825507 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 14:42:07.386473 2825507 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:42:07.469842 2825507 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-21 14:42:07.460220229 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 14:42:07.469940 2825507 docker.go:319] overlay module found
	I1121 14:42:07.473418 2825507 out.go:179] * Using the docker driver based on user configuration
	I1121 14:42:07.476440 2825507 start.go:309] selected driver: docker
	I1121 14:42:07.476459 2825507 start.go:930] validating driver "docker" against <nil>
	I1121 14:42:07.476473 2825507 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:42:07.479972 2825507 out.go:203] 
	W1121 14:42:07.482875 2825507 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1121 14:42:07.486006 2825507 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-650772 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-650772

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-650772

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-650772

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-650772

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-650772

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-650772

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-650772

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-650772

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-650772

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-650772

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-650772

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-650772" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-650772" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-650772" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-650772" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-650772" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-650772" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-650772" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-650772" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-650772" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-650772" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-650772" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:42:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-012139
contexts:
- context:
    cluster: kubernetes-upgrade-012139
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:42:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-012139
  name: kubernetes-upgrade-012139
current-context: kubernetes-upgrade-012139
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-012139
  user:
    client-certificate: /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/kubernetes-upgrade-012139/client.crt
    client-key: /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/kubernetes-upgrade-012139/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-650772

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-650772"

                                                
                                                
----------------------- debugLogs end: false-650772 [took: 4.96865376s] --------------------------------
helpers_test.go:175: Cleaning up "false-650772" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-650772
--- PASS: TestNetworkPlugins/group/false (5.50s)
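Note: this test passes by failing fast: --cni=false is rejected with MK_USAGE because the containerd runtime requires a CNI, as the stderr above shows. A sketch of an invocation that would pass validation, with an explicitly selected built-in CNI (bridge here; calico, cilium, flannel, and kindnet are other built-in choices):

	$ out/minikube-linux-arm64 start -p false-650772 --memory=3072 --cni=bridge --driver=docker --container-runtime=containerd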

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (60.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-092258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1121 14:44:10.339890 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:44:30.354329 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-092258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m0.51806247s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.52s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-092258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-092258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.059391355s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-092258 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-092258 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-092258 --alsologtostderr -v=3: (12.117795141s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-092258 -n old-k8s-version-092258
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-092258 -n old-k8s-version-092258: exit status 7 (79.873085ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-092258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
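Note: in this run `minikube status` exits 7 with the Host reported as Stopped, and the test treats that as acceptable ("may be ok") since the dashboard addon can be enabled against a stopped profile. A sketch of surfacing the code directly:

	$ out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-092258 -n old-k8s-version-092258; echo "exit=$?"    # Stopped, exit=7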

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (51.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-092258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-092258 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (51.233298417s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-092258 -n old-k8s-version-092258
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.61s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-svsvs" [b1b0c55c-ec33-4b77-b41a-7c0c82c1c4b5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003611182s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-svsvs" [b1b0c55c-ec33-4b77-b41a-7c0c82c1c4b5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004356743s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-092258 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-092258 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-092258 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-092258 -n old-k8s-version-092258
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-092258 -n old-k8s-version-092258: exit status 2 (565.526942ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-092258 -n old-k8s-version-092258
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-092258 -n old-k8s-version-092258: exit status 2 (476.107771ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-092258 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-092258 --alsologtostderr -v=1: (1.21121732s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-092258 -n old-k8s-version-092258
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-092258 -n old-k8s-version-092258: (1.421206044s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-092258 -n old-k8s-version-092258
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (5.05s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (71.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-208006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-208006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m11.228081344s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.23s)
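The no-preload profile exercises a cold start; a sketch of the invocation with the one distinguishing flag called out (the flags are verbatim from the test, the comment is an editorial gloss on minikube's documented behavior):

	# --preload=false skips the preloaded images tarball, so images are pulled at start time
	out/minikube-linux-arm64 start -p no-preload-208006 --memory=3072 --alsologtostderr \
	    --wait=true --preload=false --driver=docker --container-runtime=containerd \
	    --kubernetes-version=v1.34.1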

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (92.01s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-695324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-695324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m32.004975004s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (92.01s)
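The embed-certs profile differs only in the --embed-certs flag, which inlines client certificate data into the generated kubeconfig instead of referencing files on disk. One way to spot-check that after the start, assuming the default kubeconfig location and minikube's convention of naming the kubeconfig user after the profile:

	out/minikube-linux-arm64 start -p embed-certs-695324 --memory=3072 --alsologtostderr \
	    --wait=true --embed-certs --driver=docker --container-runtime=containerd \
	    --kubernetes-version=v1.34.1
	# should show client-certificate-data (inline) rather than a client-certificate file path
	kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-695324")].user}'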

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-208006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-208006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.003773935s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-208006 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.17s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-208006 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-208006 --alsologtostderr -v=3: (12.169529865s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-208006 -n no-preload-208006
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-208006 -n no-preload-208006: exit status 7 (72.628156ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-208006 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
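What this step establishes is that addon changes are accepted while the cluster is down: status exits 7 with Host=Stopped, and addons enable still succeeds because the change is recorded in the profile and applied on the next start. A by-hand sketch of the same two commands:

	out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-208006    # "Stopped", exit status 7
	out/minikube-linux-arm64 addons enable dashboard -p no-preload-208006 \
	    --images=MetricsScraper=registry.k8s.io/echoserver:1.4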

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (62.62s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-208006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-208006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m2.236004856s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-208006 -n no-preload-208006
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (62.62s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.46s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-695324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-695324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.314048475s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-695324 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.46s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.95s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-695324 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-695324 --alsologtostderr -v=3: (12.953262941s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-695324 -n embed-certs-695324
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-695324 -n embed-certs-695324: exit status 7 (138.303164ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-695324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (51.54s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-695324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-695324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (51.057265178s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-695324 -n embed-certs-695324
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.54s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qbdbs" [d8b97fea-1888-4ab0-94ec-1dfa530705e4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003333741s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qbdbs" [d8b97fea-1888-4ab0-94ec-1dfa530705e4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003102699s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-208006 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-208006 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
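The image audit is a single CLI call; the test parses the JSON output and logs any image outside the expected Kubernetes set (here the busybox test workload and the kindnet CNI image, which are reported but not treated as failures). To inspect by hand:

	out/minikube-linux-arm64 -p no-preload-208006 image list --format=json
	# or the human-readable table form
	out/minikube-linux-arm64 -p no-preload-208006 image list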

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.2s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-208006 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-208006 -n no-preload-208006
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-208006 -n no-preload-208006: exit status 2 (387.762269ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-208006 -n no-preload-208006
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-208006 -n no-preload-208006: exit status 2 (346.631778ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-208006 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-208006 -n no-preload-208006
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-208006 -n no-preload-208006
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-219338 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-219338 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m26.367510772s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.37s)
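This profile's only twist is a non-default API server port; a sketch, assuming minikube's usual behavior of pointing the generated kubeconfig context at the requested port:

	out/minikube-linux-arm64 start -p default-k8s-diff-port-219338 --memory=3072 --alsologtostderr \
	    --wait=true --apiserver-port=8444 --driver=docker --container-runtime=containerd \
	    --kubernetes-version=v1.34.1
	# the reported control-plane URL should end in :8444
	kubectl --context default-k8s-diff-port-219338 cluster-info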

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-t4ffh" [1033dd66-253a-4fad-96fa-e2333ee2e916] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005848772s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-t4ffh" [1033dd66-253a-4fad-96fa-e2333ee2e916] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005080166s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-695324 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-695324 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-695324 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-695324 --alsologtostderr -v=1: (1.001170359s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-695324 -n embed-certs-695324
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-695324 -n embed-certs-695324: exit status 2 (391.815572ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-695324 -n embed-certs-695324
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-695324 -n embed-certs-695324: exit status 2 (458.703599ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-695324 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-695324 -n embed-certs-695324
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-695324 -n embed-certs-695324
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (40.14s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-921069 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1121 14:49:30.354621 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:49:37.067839 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:49:37.074144 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:49:37.085450 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:49:37.106761 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:49:37.148088 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:49:37.229461 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:49:37.391212 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:49:37.712820 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:49:38.354205 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:49:39.636459 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:49:42.198292 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:49:47.320560 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:49:57.562174 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-921069 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (40.135938164s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.14s)
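The newest-cni start carries the most flags in this run; a sketch with the flags broken out (flags verbatim from the test, comments editorial). Per the test's own warning in the sections that follow, pods cannot schedule until a CNI is installed, which is why --wait is restricted to a few components instead of --wait=true:

	out/minikube-linux-arm64 start -p newest-cni-921069 --memory=3072 --alsologtostderr \
	    --wait=apiserver,system_pods,default_sa \
	    --network-plugin=cni \
	    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	    --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1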

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-921069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-921069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.263124716s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.36s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-921069 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-921069 --alsologtostderr -v=3: (1.358545704s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.36s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-921069 -n newest-cni-921069
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-921069 -n newest-cni-921069: exit status 7 (90.986424ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-921069 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.67s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-921069 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1121 14:50:18.043604 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-921069 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (17.209617949s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-921069 -n newest-cni-921069
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.67s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-921069 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.01s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-921069 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-921069 -n newest-cni-921069
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-921069 -n newest-cni-921069: exit status 2 (352.235459ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-921069 -n newest-cni-921069
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-921069 -n newest-cni-921069: exit status 2 (336.142056ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-921069 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-921069 -n newest-cni-921069
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-921069 -n newest-cni-921069
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.01s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (84.99s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-650772 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-650772 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m24.992727092s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-219338 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-219338 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.294274901s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-219338 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-219338 --alsologtostderr -v=3
E1121 14:50:59.005820 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-219338 --alsologtostderr -v=3: (12.428263151s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-219338 -n default-k8s-diff-port-219338
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-219338 -n default-k8s-diff-port-219338: exit status 7 (138.406912ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-219338 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-219338 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1121 14:51:07.267144 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-219338 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (52.378517998s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-219338 -n default-k8s-diff-port-219338
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rmd4t" [52628189-acf7-4990-9b3b-92f7fea99793] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003143181s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.7s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-650772 "pgrep -a kubelet"
I1121 14:52:00.650697 2635785 config.go:182] Loaded profile config "auto-650772": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.70s)
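KubeletFlags is a one-liner: it greps the running kubelet's full command line over SSH so the test can assert on the flags it was started with. The same check by hand:

	out/minikube-linux-arm64 ssh -p auto-650772 "pgrep -a kubelet"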

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-650772 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rz2z6" [a5972514-4212-458d-a92e-8d9faa359ecc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rz2z6" [a5972514-4212-458d-a92e-8d9faa359ecc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004406693s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rmd4t" [52628189-acf7-4990-9b3b-92f7fea99793] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003042369s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-219338 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-219338 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-219338 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-219338 -n default-k8s-diff-port-219338
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-219338 -n default-k8s-diff-port-219338: exit status 2 (335.956054ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-219338 -n default-k8s-diff-port-219338
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-219338 -n default-k8s-diff-port-219338: exit status 2 (329.271801ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-219338 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-219338 -n default-k8s-diff-port-219338
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-219338 -n default-k8s-diff-port-219338
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.42s)
E1121 14:57:41.886373 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:57:48.506653 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-650772 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-650772 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-650772 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
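The NetCatPod/DNS/Localhost/HairPin checks above share one workload; a condensed sketch of the whole sequence against the auto profile, with the commands taken verbatim from the log (nc flags: -w 5 connection timeout, -i 5 interval between lines, -z scan without sending data):

	kubectl --context auto-650772 replace --force -f testdata/netcat-deployment.yaml
	# DNS: resolve the in-cluster API service name
	kubectl --context auto-650772 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: the pod can reach its own port directly
	kubectl --context auto-650772 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# HairPin: the pod can reach itself back through its own service
	kubectl --context auto-650772 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"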

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (82.98s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-650772 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1121 14:52:20.795055 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:52:20.802491 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:52:20.813962 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:52:20.835794 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:52:20.877467 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:52:20.927219 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:52:20.958848 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:52:21.120399 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:52:21.442287 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:52:22.084187 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:52:23.371163 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:52:25.934302 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:52:31.057007 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-650772 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m22.978058959s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.98s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (63.59s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-650772 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1121 14:52:41.301265 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:53:01.782680 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-650772 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m3.586426863s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.59s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-kzjcx" [f8f5e6c5-afcf-4359-86de-852e045340ee] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003830827s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
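The ControllerPod gate waits for the CNI daemon pods to be Running and healthy; roughly the same check by hand, with kubectl wait standing in for the test's own polling helper (label selector and namespace taken from the log):

	kubectl --context kindnet-650772 -n kube-system wait pod \
	    -l app=kindnet --for=condition=Ready --timeout=10m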

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-f67gv" [10e1f236-22a2-4a4f-99da-d8ccacec0cbb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-f67gv" [10e1f236-22a2-4a4f-99da-d8ccacec0cbb] Running
E1121 14:53:42.744010 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003894084s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-650772 "pgrep -a kubelet"
I1121 14:53:45.111061 2635785 config.go:182] Loaded profile config "kindnet-650772": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-650772 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pq24s" [f4500e6a-21c5-4dfe-b2a5-9a39ed0d3c74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pq24s" [f4500e6a-21c5-4dfe-b2a5-9a39ed0d3c74] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004654094s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-650772 "pgrep -a kubelet"
I1121 14:53:47.215149 2635785 config.go:182] Loaded profile config "calico-650772": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.41s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-650772 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2pqr9" [5abed720-1fe6-41b6-926c-baf522a9d707] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2pqr9" [5abed720-1fe6-41b6-926c-baf522a9d707] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003721395s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-650772 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)
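The DNS probe resolves the short name kubernetes.default from inside the pod, which only succeeds when the CNI carries pod traffic to cluster DNS and the pod's resolv.conf search path is intact. The same lookup can be pinned to the cluster DNS service directly (10.96.0.10, the address the kubenet debug log at the end of this report also targets):

    kubectl --context kindnet-650772 exec deployment/netcat -- \
      nslookup kubernetes.default.svc.cluster.local 10.96.0.10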

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-650772 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-650772 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)
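Localhost and HairPin reuse the same probe with different targets: localhost 8080 confirms the container can reach its own listener, while netcat 8080 dials the pod's own Service name, so the connection leaves the pod, hits the Service VIP, and must hairpin back to the same pod. Assuming BusyBox-style nc flags, -z only checks for a listener without sending data, -w 5 bounds the connect wait in seconds, and -i 5 spaces out sent lines (moot for a bare scan). A manual re-run that also reports the outcome:

    kubectl --context kindnet-650772 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -z netcat 8080 && echo hairpin ok || echo hairpin failed"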

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-650772 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-650772 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-650772 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (64.07s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-650772 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-650772 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m4.067114772s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.07s)
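The Start invocations in this group differ only in how the CNI is selected: --cni takes either a built-in plugin name or a path to a CNI manifest that minikube applies once the node is up, which is what testdata/kube-flannel.yaml exercises here. Schematically (flags copied from the runs in this report):

    # built-in plugin, selected by name (as in the flannel and bridge runs below)
    out/minikube-linux-arm64 start -p flannel-650772 --cni=flannel \
      --driver=docker --container-runtime=containerd
    # custom plugin, applied from a local manifest
    out/minikube-linux-arm64 start -p custom-flannel-650772 --cni=testdata/kube-flannel.yaml \
      --driver=docker --container-runtime=containerd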

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (53.9s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-650772 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1121 14:54:30.357863 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/addons-891209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:54:37.068178 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:55:04.665381 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/no-preload-208006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:55:04.770267 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/old-k8s-version-092258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-650772 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (53.899560303s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (53.90s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-650772 "pgrep -a kubelet"
I1121 14:55:19.835800 2635785 config.go:182] Loaded profile config "enable-default-cni-650772": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-650772 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xtp5z" [eafefadf-d11b-46f3-b005-c2d54520007e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xtp5z" [eafefadf-d11b-46f3-b005-c2d54520007e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.007017041s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-650772 "pgrep -a kubelet"
I1121 14:55:27.676482 2635785 config.go:182] Loaded profile config "custom-flannel-650772": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-650772 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-np7fj" [40633187-73c8-4bec-8960-39484e4116e9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-np7fj" [40633187-73c8-4bec-8960-39484e4116e9] Running
E1121 14:55:36.132038 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/default-k8s-diff-port-219338/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:55:36.138542 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/default-k8s-diff-port-219338/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:55:36.149981 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/default-k8s-diff-port-219338/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:55:36.171439 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/default-k8s-diff-port-219338/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:55:36.213254 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/default-k8s-diff-port-219338/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:55:36.294983 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/default-k8s-diff-port-219338/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:55:36.456452 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/default-k8s-diff-port-219338/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:55:36.778596 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/default-k8s-diff-port-219338/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.00434319s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-650772 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-650772 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-650772 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-650772 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-650772 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-650772 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1121 14:55:37.420809 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/default-k8s-diff-port-219338/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (63.88s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-650772 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1121 14:55:56.628632 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/default-k8s-diff-port-219338/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-650772 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m3.884369935s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.88s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (80.73s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-650772 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1121 14:56:07.266685 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/functional-907462/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:56:17.110936 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/default-k8s-diff-port-219338/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-650772 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m20.728048668s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.73s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-s7vxk" [13794f38-6a3b-4034-8b27-b1d28c5ea10c] Running
E1121 14:56:58.072222 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/default-k8s-diff-port-219338/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:57:00.910853 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:57:00.917244 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:57:00.928616 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:57:00.950084 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:57:00.991843 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:57:01.073665 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:57:01.235204 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:57:01.556934 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:57:02.198653 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00385884s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
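The ControllerPod gate blocks for up to 10m until the flannel DaemonSet pod (label app=flannel in the kube-flannel namespace) reports healthy, and only then do the per-plugin connectivity checks run. Roughly the same wait, expressed with stock kubectl and assuming the flannel-650772 context exists:

    kubectl --context flannel-650772 -n kube-flannel \
      wait --for=condition=Ready pod -l app=flannel --timeout=600s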

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-650772 "pgrep -a kubelet"
I1121 14:57:03.259042 2635785 config.go:182] Loaded profile config "flannel-650772": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.3s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-650772 replace --force -f testdata/netcat-deployment.yaml
E1121 14:57:03.480547 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ph5jd" [b16ea5ee-081e-466d-b03e-de7f942e1977] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1121 14:57:06.042051 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-ph5jd" [b16ea5ee-081e-466d-b03e-de7f942e1977] Running
E1121 14:57:11.163334 2635785 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/auto-650772/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003407063s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-650772 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-650772 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-650772 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-650772 "pgrep -a kubelet"
I1121 14:57:25.062872 2635785 config.go:182] Loaded profile config "bridge-650772": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.36s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-650772 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vjn8z" [86be68d3-99ae-4c33-9bb4-cecdcc8f8192] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vjn8z" [86be68d3-99ae-4c33-9bb4-cecdcc8f8192] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004720222s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-650772 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-650772 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-650772 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.25s)

                                                
                                    

Test skip (30/333)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.48s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-174109 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-174109" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-174109
--- SKIP: TestDownloadOnlyKic (0.48s)

                                                
                                    
TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.28s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-422442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-422442
--- SKIP: TestStartStop/group/disable-driver-mounts (0.28s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.47s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-650772 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-650772

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-650772

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-650772

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-650772

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-650772

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-650772

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-650772

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-650772

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-650772

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-650772

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: /etc/hosts:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: /etc/resolv.conf:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-650772

>>> host: crictl pods:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: crictl containers:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> k8s: describe netcat deployment:
error: context "kubenet-650772" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-650772" does not exist

>>> k8s: netcat logs:
error: context "kubenet-650772" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-650772" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-650772" does not exist

>>> k8s: coredns logs:
error: context "kubenet-650772" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-650772" does not exist

>>> k8s: api server logs:
error: context "kubenet-650772" does not exist

>>> host: /etc/cni:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: ip a s:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: ip r s:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: iptables-save:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: iptables table nat:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-650772" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-650772" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-650772" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: kubelet daemon config:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> k8s: kubelet logs:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21847-2633933/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:41:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-012139
contexts:
- context:
    cluster: kubernetes-upgrade-012139
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:41:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-012139
  name: kubernetes-upgrade-012139
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-012139
  user:
    client-certificate: /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/kubernetes-upgrade-012139/client.crt
    client-key: /home/jenkins/minikube-integration/21847-2633933/.minikube/profiles/kubernetes-upgrade-012139/client.key
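Every probe in this dump fails the same way because the kubenet-650772 context was never created (the test skips before minikube start runs), so kubectl falls back to the kubeconfig shown above, which only knows the leftover kubernetes-upgrade-012139 profile and has an empty current-context. Listing what actually exists is a one-liner:

    kubectl config get-contexts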

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-650772

>>> host: docker daemon status:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: docker daemon config:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: docker system info:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: cri-docker daemon status:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: cri-docker daemon config:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: cri-dockerd version:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: containerd daemon status:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: containerd daemon config:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: containerd config dump:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: crio daemon status:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: crio daemon config:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: /etc/crio:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

>>> host: crio config:
* Profile "kubenet-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-650772"

----------------------- debugLogs end: kubenet-650772 [took: 5.261120626s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-650772" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-650772
--- SKIP: TestNetworkPlugins/group/kubenet (5.47s)

TestNetworkPlugins/group/cilium (5.76s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-650772 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-650772

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-650772

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-650772

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-650772

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-650772

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-650772

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-650772

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-650772

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-650772

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-650772
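
Note: on a live cluster each "netcat:" probe above runs inside the harness's netcat test pod and exercises in-cluster DNS end to end (10.96.0.10 is the conventional kube-dns ClusterIP in the default 10.96.0.0/12 service range). A minimal sketch of the equivalent manual checks (illustrative; the pod name "netcat" is an assumption, and the pod never existed in this skipped run):

    kubectl --context cilium-650772 exec netcat -- nslookup kubernetes.default
    kubectl --context cilium-650772 exec netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local
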
>>> host: /etc/nsswitch.conf:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: /etc/hosts:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: /etc/resolv.conf:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"
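
Note: the "host:" entries are read from inside the minikube node rather than from a pod, which is why they fail at profile lookup instead of at kubectl. A minimal sketch of collecting the same files by hand once a profile actually exists (illustrative):

    minikube ssh -p cilium-650772 "cat /etc/resolv.conf"
    minikube ssh -p cilium-650772 "cat /etc/hosts"
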
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-650772

>>> host: crictl pods:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: crictl containers:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> k8s: describe netcat deployment:
error: context "cilium-650772" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-650772" does not exist

>>> k8s: netcat logs:
error: context "cilium-650772" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-650772" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-650772" does not exist

>>> k8s: coredns logs:
error: context "cilium-650772" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-650772" does not exist

>>> k8s: api server logs:
error: context "cilium-650772" does not exist

>>> host: /etc/cni:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: ip a s:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: ip r s:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: iptables-save:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: iptables table nat:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-650772

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-650772

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-650772" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-650772" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-650772

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-650772

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-650772" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-650772" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-650772" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-650772" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-650772" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: kubelet daemon config:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> k8s: kubelet logs:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"
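
Note: kubelet status, config, and logs all come from systemd on the node. A minimal sketch of gathering the same diagnostics by hand on a running profile (illustrative):

    minikube ssh -p cilium-650772 "sudo systemctl status kubelet"
    minikube ssh -p cilium-650772 "sudo journalctl -u kubelet --no-pager | tail -n 100"
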
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
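
Note: unlike the kubenet dump above, this kubeconfig is entirely empty (clusters, contexts, and users are all null), presumably because the leftover kubernetes-upgrade-012139 profile had been deleted by the time this pass ran. An empty config renders exactly this null skeleton (illustrative):

    kubectl config view   # prints the shape above when no clusters are configured
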
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-650772

>>> host: docker daemon status:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: docker daemon config:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: docker system info:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: cri-docker daemon status:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: cri-docker daemon config:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: cri-dockerd version:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: containerd daemon status:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: containerd daemon config:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: containerd config dump:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: crio daemon status:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: crio daemon config:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: /etc/crio:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

>>> host: crio config:
* Profile "cilium-650772" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-650772"

----------------------- debugLogs end: cilium-650772 [took: 5.539514824s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-650772" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-650772
--- SKIP: TestNetworkPlugins/group/cilium (5.76s)