Test Report: Docker_Linux_containerd_arm64 21932

84a896b9ca11c6987b6528b1f6e82b411b2540e2:2025-11-24:42492

Failed tests (4/333)

Order  Failed test                                                    Duration
  301  TestStartStop/group/old-k8s-version/serial/DeployApp           13.81s
  314  TestStartStop/group/default-k8s-diff-port/serial/DeployApp     13.22s
  315  TestStartStop/group/embed-certs/serial/DeployApp               14.71s
  341  TestStartStop/group/no-preload/serial/DeployApp                15.39s
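All four failures are the DeployApp step of a TestStartStop group; the detailed log below (old-k8s-version) shows the assertion that fails there: 'ulimit -n' inside the busybox pod returned 1024 instead of the expected 1048576. A minimal way to re-run that check by hand is sketched below; it assumes the old-k8s-version-318786 context and the busybox pod from the log still exist.

    # Re-run the open-files check the test performs (expected value: 1048576)
    kubectl --context old-k8s-version-318786 exec busybox -- /bin/sh -c "ulimit -n"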
TestStartStop/group/old-k8s-version/serial/DeployApp (13.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-318786 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f566ecf0-6907-464c-9185-0f1cac06d38f] Pending
helpers_test.go:352: "busybox" [f566ecf0-6907-464c-9185-0f1cac06d38f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f566ecf0-6907-464c-9185-0f1cac06d38f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003400034s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-318786 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-318786
helpers_test.go:243: (dbg) docker inspect old-k8s-version-318786:
-- stdout --
	[
	    {
	        "Id": "a1a9c211e03d84dc290244440868edd560e068d58cbff839724b36106b46b8b5",
	        "Created": "2025-11-24T13:59:48.707287298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203512,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:59:48.794762344Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/a1a9c211e03d84dc290244440868edd560e068d58cbff839724b36106b46b8b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a1a9c211e03d84dc290244440868edd560e068d58cbff839724b36106b46b8b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/a1a9c211e03d84dc290244440868edd560e068d58cbff839724b36106b46b8b5/hosts",
	        "LogPath": "/var/lib/docker/containers/a1a9c211e03d84dc290244440868edd560e068d58cbff839724b36106b46b8b5/a1a9c211e03d84dc290244440868edd560e068d58cbff839724b36106b46b8b5-json.log",
	        "Name": "/old-k8s-version-318786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-318786:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-318786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a1a9c211e03d84dc290244440868edd560e068d58cbff839724b36106b46b8b5",
	                "LowerDir": "/var/lib/docker/overlay2/6d81197f1905508bee65982ae14ace70a0ac50476483b3a6dbe6ee1b71c20126-init/diff:/var/lib/docker/overlay2/f206897dad0d7c6b66379aa7c75402ab98ba158a4fc5aedf84eda3d57da10430/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6d81197f1905508bee65982ae14ace70a0ac50476483b3a6dbe6ee1b71c20126/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6d81197f1905508bee65982ae14ace70a0ac50476483b3a6dbe6ee1b71c20126/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6d81197f1905508bee65982ae14ace70a0ac50476483b3a6dbe6ee1b71c20126/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-318786",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-318786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-318786",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-318786",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-318786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afc5451f55c0addfee2faf75046d85ee1aff51cfb29d1330d1b700fc0f910363",
	            "SandboxKey": "/var/run/docker/netns/afc5451f55c0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-318786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:35:e5:9c:e1:30",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3c8da78d6dab92e1227f095e0039dcc72885109237746924b800f0f7e07a64d9",
	                    "EndpointID": "c068219706ac0808a20d3010c587a2e59831507d8b6c4030ff3e4a62ce6b15dc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-318786",
	                        "a1a9c211e03d"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
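The port mappings in the inspect output above can also be read individually with the same Go template the test harness uses later in this log (a sketch; the profile name is taken from the output above):

    # Print the host port mapped to the container's SSH port (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-318786
    # From the inspect output above this resolves to 33053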
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-318786 -n old-k8s-version-318786
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-318786 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-318786 logs -n 25: (1.183996865s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-803934 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo containerd config dump                                                                                                                                                                                                        │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo crio config                                                                                                                                                                                                                   │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ delete  │ -p cilium-803934                                                                                                                                                                                                                                    │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p force-systemd-env-134839 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-134839  │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p kubernetes-upgrade-758885                                                                                                                                                                                                                        │ kubernetes-upgrade-758885 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p cert-expiration-865605 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-865605    │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ force-systemd-env-134839 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-134839  │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p force-systemd-env-134839                                                                                                                                                                                                                         │ force-systemd-env-134839  │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p cert-options-440754 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-440754       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ cert-options-440754 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-440754       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ -p cert-options-440754 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-440754       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p cert-options-440754                                                                                                                                                                                                                              │ cert-options-440754       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p old-k8s-version-318786 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-318786    │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 14:00 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:59:42
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:59:42.406479  203121 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:59:42.406674  203121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:59:42.406701  203121 out.go:374] Setting ErrFile to fd 2...
	I1124 13:59:42.406722  203121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:59:42.407140  203121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 13:59:42.407724  203121 out.go:368] Setting JSON to false
	I1124 13:59:42.409260  203121 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6131,"bootTime":1763986651,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 13:59:42.409372  203121 start.go:143] virtualization:  
	I1124 13:59:42.413282  203121 out.go:179] * [old-k8s-version-318786] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 13:59:42.417925  203121 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:59:42.418098  203121 notify.go:221] Checking for updates...
	I1124 13:59:42.424905  203121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:59:42.428148  203121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 13:59:42.431322  203121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	I1124 13:59:42.434379  203121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 13:59:42.438100  203121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:59:42.442160  203121 config.go:182] Loaded profile config "cert-expiration-865605": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:59:42.442285  203121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:59:42.470073  203121 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 13:59:42.470195  203121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:59:42.532782  203121 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 13:59:42.52123261 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:59:42.532892  203121 docker.go:319] overlay module found
	I1124 13:59:42.536185  203121 out.go:179] * Using the docker driver based on user configuration
	I1124 13:59:42.539175  203121 start.go:309] selected driver: docker
	I1124 13:59:42.539208  203121 start.go:927] validating driver "docker" against <nil>
	I1124 13:59:42.539232  203121 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:59:42.540233  203121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:59:42.601740  203121 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 13:59:42.592481576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:59:42.601887  203121 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:59:42.602115  203121 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:59:42.605231  203121 out.go:179] * Using Docker driver with root privileges
	I1124 13:59:42.608204  203121 cni.go:84] Creating CNI manager for ""
	I1124 13:59:42.608281  203121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:59:42.608296  203121 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:59:42.608380  203121 start.go:353] cluster config:
	{Name:old-k8s-version-318786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-318786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:59:42.611704  203121 out.go:179] * Starting "old-k8s-version-318786" primary control-plane node in "old-k8s-version-318786" cluster
	I1124 13:59:42.614615  203121 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 13:59:42.617691  203121 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:59:42.620619  203121 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:59:42.620699  203121 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1124 13:59:42.620716  203121 cache.go:65] Caching tarball of preloaded images
	I1124 13:59:42.620714  203121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:59:42.620820  203121 preload.go:238] Found /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1124 13:59:42.620838  203121 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1124 13:59:42.620958  203121 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/config.json ...
	I1124 13:59:42.620983  203121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/config.json: {Name:mkdbbadabe7d23b9f104ff19d81818950111a382 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:42.640749  203121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:59:42.640776  203121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:59:42.640802  203121 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:59:42.640833  203121 start.go:360] acquireMachinesLock for old-k8s-version-318786: {Name:mkda208a8325231a646a1a7f876724cc4fca17ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:59:42.640958  203121 start.go:364] duration metric: took 103.057µs to acquireMachinesLock for "old-k8s-version-318786"
	I1124 13:59:42.640986  203121 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-318786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-318786 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:59:42.641059  203121 start.go:125] createHost starting for "" (driver="docker")
	I1124 13:59:42.644471  203121 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:59:42.644694  203121 start.go:159] libmachine.API.Create for "old-k8s-version-318786" (driver="docker")
	I1124 13:59:42.644747  203121 client.go:173] LocalClient.Create starting
	I1124 13:59:42.644827  203121 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem
	I1124 13:59:42.644867  203121 main.go:143] libmachine: Decoding PEM data...
	I1124 13:59:42.644888  203121 main.go:143] libmachine: Parsing certificate...
	I1124 13:59:42.644949  203121 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem
	I1124 13:59:42.644971  203121 main.go:143] libmachine: Decoding PEM data...
	I1124 13:59:42.644986  203121 main.go:143] libmachine: Parsing certificate...
	I1124 13:59:42.645338  203121 cli_runner.go:164] Run: docker network inspect old-k8s-version-318786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:59:42.661505  203121 cli_runner.go:211] docker network inspect old-k8s-version-318786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:59:42.661596  203121 network_create.go:284] running [docker network inspect old-k8s-version-318786] to gather additional debugging logs...
	I1124 13:59:42.661615  203121 cli_runner.go:164] Run: docker network inspect old-k8s-version-318786
	W1124 13:59:42.677608  203121 cli_runner.go:211] docker network inspect old-k8s-version-318786 returned with exit code 1
	I1124 13:59:42.677643  203121 network_create.go:287] error running [docker network inspect old-k8s-version-318786]: docker network inspect old-k8s-version-318786: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-318786 not found
	I1124 13:59:42.677659  203121 network_create.go:289] output of [docker network inspect old-k8s-version-318786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-318786 not found
	
	** /stderr **
	I1124 13:59:42.677758  203121 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:59:42.694925  203121 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e5e15b13860d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:3d:37:c4:cc:77} reservation:<nil>}
	I1124 13:59:42.695253  203121 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-66593a990bce IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:c0:9b:bc:41:ca} reservation:<nil>}
	I1124 13:59:42.695642  203121 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-37e9fb0954cf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:0b:6f:6e:b2:8c} reservation:<nil>}
	I1124 13:59:42.695904  203121 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5977b32dc412 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:75:42:7c:e9:e6} reservation:<nil>}
	I1124 13:59:42.696411  203121 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019bcfe0}
	I1124 13:59:42.696437  203121 network_create.go:124] attempt to create docker network old-k8s-version-318786 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 13:59:42.696498  203121 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-318786 old-k8s-version-318786
	I1124 13:59:42.754268  203121 network_create.go:108] docker network old-k8s-version-318786 192.168.85.0/24 created
	I1124 13:59:42.754297  203121 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-318786" container
	I1124 13:59:42.754382  203121 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:59:42.771474  203121 cli_runner.go:164] Run: docker volume create old-k8s-version-318786 --label name.minikube.sigs.k8s.io=old-k8s-version-318786 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:59:42.789916  203121 oci.go:103] Successfully created a docker volume old-k8s-version-318786
	I1124 13:59:42.790028  203121 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-318786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-318786 --entrypoint /usr/bin/test -v old-k8s-version-318786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:59:43.376934  203121 oci.go:107] Successfully prepared a docker volume old-k8s-version-318786
	I1124 13:59:43.377002  203121 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:59:43.377014  203121 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 13:59:43.377093  203121 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-318786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 13:59:48.629782  203121 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-318786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.252643456s)
	I1124 13:59:48.629823  203121 kic.go:203] duration metric: took 5.252805903s to extract preloaded images to volume ...
	W1124 13:59:48.629966  203121 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 13:59:48.630073  203121 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:59:48.692534  203121 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-318786 --name old-k8s-version-318786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-318786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-318786 --network old-k8s-version-318786 --ip 192.168.85.2 --volume old-k8s-version-318786:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:59:49.023181  203121 cli_runner.go:164] Run: docker container inspect old-k8s-version-318786 --format={{.State.Running}}
	I1124 13:59:49.046529  203121 cli_runner.go:164] Run: docker container inspect old-k8s-version-318786 --format={{.State.Status}}
	I1124 13:59:49.073693  203121 cli_runner.go:164] Run: docker exec old-k8s-version-318786 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:59:49.143680  203121 oci.go:144] the created container "old-k8s-version-318786" has a running status.
	I1124 13:59:49.143714  203121 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa...
	I1124 13:59:49.471341  203121 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:59:49.501921  203121 cli_runner.go:164] Run: docker container inspect old-k8s-version-318786 --format={{.State.Status}}
	I1124 13:59:49.532238  203121 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:59:49.532267  203121 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-318786 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:59:49.607023  203121 cli_runner.go:164] Run: docker container inspect old-k8s-version-318786 --format={{.State.Status}}
	I1124 13:59:49.637450  203121 machine.go:94] provisionDockerMachine start ...
	I1124 13:59:49.637558  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 13:59:49.663172  203121 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:49.663576  203121 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1124 13:59:49.663586  203121 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:59:49.666892  203121 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 13:59:52.819647  203121 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-318786
	
	I1124 13:59:52.819716  203121 ubuntu.go:182] provisioning hostname "old-k8s-version-318786"
	I1124 13:59:52.819805  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 13:59:52.837381  203121 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:52.837693  203121 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1124 13:59:52.837710  203121 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-318786 && echo "old-k8s-version-318786" | sudo tee /etc/hostname
	I1124 13:59:53.001525  203121 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-318786
	
	I1124 13:59:53.001631  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 13:59:53.024082  203121 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:53.024554  203121 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1124 13:59:53.024610  203121 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-318786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-318786/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-318786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:59:53.180483  203121 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:59:53.180555  203121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2368/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2368/.minikube}
	I1124 13:59:53.180601  203121 ubuntu.go:190] setting up certificates
	I1124 13:59:53.180641  203121 provision.go:84] configureAuth start
	I1124 13:59:53.180754  203121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-318786
	I1124 13:59:53.197870  203121 provision.go:143] copyHostCerts
	I1124 13:59:53.197937  203121 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem, removing ...
	I1124 13:59:53.197947  203121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem
	I1124 13:59:53.198026  203121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem (1679 bytes)
	I1124 13:59:53.198115  203121 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem, removing ...
	I1124 13:59:53.198120  203121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem
	I1124 13:59:53.198145  203121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem (1082 bytes)
	I1124 13:59:53.198195  203121 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem, removing ...
	I1124 13:59:53.198199  203121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem
	I1124 13:59:53.198221  203121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem (1123 bytes)
	I1124 13:59:53.198264  203121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-318786 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-318786]
	I1124 13:59:53.447750  203121 provision.go:177] copyRemoteCerts
	I1124 13:59:53.447821  203121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:59:53.447859  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 13:59:53.466989  203121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa Username:docker}
	I1124 13:59:53.573838  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 13:59:53.593131  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 13:59:53.614562  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 13:59:53.634677  203121 provision.go:87] duration metric: took 453.994052ms to configureAuth
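The copyRemoteCerts step above pushes ca.pem, server.pem and server-key.pem into /etc/docker on the node. A minimal hand-run sketch for double-checking the provisioned server certificate; the paths, SAN list and profile name are taken from this log, everything else (including using `minikube ssh` as the entry point) is an assumption:

  # e.g. after `minikube ssh -p old-k8s-version-318786`
  # print the SANs of the server cert that configureAuth generated and copied over
  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"
  # expected to list the SANs logged above: 127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-318786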
	I1124 13:59:53.634716  203121 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:59:53.634894  203121 config.go:182] Loaded profile config "old-k8s-version-318786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 13:59:53.634916  203121 machine.go:97] duration metric: took 3.997446316s to provisionDockerMachine
	I1124 13:59:53.634923  203121 client.go:176] duration metric: took 10.990163165s to LocalClient.Create
	I1124 13:59:53.634942  203121 start.go:167] duration metric: took 10.990248318s to libmachine.API.Create "old-k8s-version-318786"
	I1124 13:59:53.634951  203121 start.go:293] postStartSetup for "old-k8s-version-318786" (driver="docker")
	I1124 13:59:53.634967  203121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:59:53.635028  203121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:59:53.635072  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 13:59:53.651615  203121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa Username:docker}
	I1124 13:59:53.760351  203121 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:59:53.763787  203121 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:59:53.763818  203121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:59:53.763831  203121 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/addons for local assets ...
	I1124 13:59:53.763886  203121 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/files for local assets ...
	I1124 13:59:53.764002  203121 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem -> 41782.pem in /etc/ssl/certs
	I1124 13:59:53.764116  203121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:59:53.771607  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /etc/ssl/certs/41782.pem (1708 bytes)
	I1124 13:59:53.790229  203121 start.go:296] duration metric: took 155.256983ms for postStartSetup
	I1124 13:59:53.790653  203121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-318786
	I1124 13:59:53.807439  203121 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/config.json ...
	I1124 13:59:53.807757  203121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:59:53.807816  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 13:59:53.825527  203121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa Username:docker}
	I1124 13:59:53.928742  203121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:59:53.933408  203121 start.go:128] duration metric: took 11.29232535s to createHost
	I1124 13:59:53.933433  203121 start.go:83] releasing machines lock for "old-k8s-version-318786", held for 11.292464025s
	I1124 13:59:53.933507  203121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-318786
	I1124 13:59:53.950335  203121 ssh_runner.go:195] Run: cat /version.json
	I1124 13:59:53.950395  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 13:59:53.950688  203121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:59:53.950748  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 13:59:53.969960  203121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa Username:docker}
	I1124 13:59:53.970283  203121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa Username:docker}
	I1124 13:59:54.187220  203121 ssh_runner.go:195] Run: systemctl --version
	I1124 13:59:54.193763  203121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:59:54.197792  203121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:59:54.197862  203121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:59:54.225219  203121 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 13:59:54.225241  203121 start.go:496] detecting cgroup driver to use...
	I1124 13:59:54.225273  203121 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 13:59:54.225319  203121 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 13:59:54.240905  203121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 13:59:54.255129  203121 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:59:54.255221  203121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:59:54.274287  203121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:59:54.293183  203121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:59:54.421827  203121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:59:54.546597  203121 docker.go:234] disabling docker service ...
	I1124 13:59:54.546687  203121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:59:54.569497  203121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:59:54.583215  203121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:59:54.700724  203121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:59:54.819165  203121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:59:54.832231  203121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:59:54.851866  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1124 13:59:54.862178  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 13:59:54.871620  203121 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 13:59:54.871738  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 13:59:54.882231  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:59:54.891717  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 13:59:54.901467  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:59:54.910294  203121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:59:54.918660  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 13:59:54.927868  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 13:59:54.937082  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 13:59:54.946216  203121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:59:54.954056  203121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:59:54.961958  203121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:59:55.097492  203121 ssh_runner.go:195] Run: sudo systemctl restart containerd
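The sed edits above switch containerd to the cgroupfs cgroup driver, pin the pause image and point conf_dir at /etc/cni/net.d before the restart. A small sketch for confirming the edits survived the restart (assumes shell access on the node; the config path comes from the log):

  grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
  # or ask containerd for its effective configuration
  sudo containerd config dump | grep -E 'SystemdCgroup|sandbox_image'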
	I1124 13:59:55.230526  203121 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 13:59:55.230649  203121 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 13:59:55.234998  203121 start.go:564] Will wait 60s for crictl version
	I1124 13:59:55.235132  203121 ssh_runner.go:195] Run: which crictl
	I1124 13:59:55.238882  203121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:59:55.268214  203121 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
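The crictl probe above reports containerd v2.1.5 behind /run/containerd/containerd.sock. An equivalent manual check, shown only as an illustration (socket path and crictl location are from the log; the explicit flag is redundant here since /etc/crictl.yaml was written a few lines earlier):

  sudo /usr/local/bin/crictl --runtime-endpoint unix:///run/containerd/containerd.sock version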
	I1124 13:59:55.268356  203121 ssh_runner.go:195] Run: containerd --version
	I1124 13:59:55.288303  203121 ssh_runner.go:195] Run: containerd --version
	I1124 13:59:55.314523  203121 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1124 13:59:55.317381  203121 cli_runner.go:164] Run: docker network inspect old-k8s-version-318786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:59:55.334289  203121 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 13:59:55.338412  203121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:59:55.348875  203121 kubeadm.go:884] updating cluster {Name:old-k8s-version-318786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-318786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:59:55.349007  203121 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:59:55.349078  203121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:59:55.373604  203121 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:59:55.373629  203121 containerd.go:534] Images already preloaded, skipping extraction
	I1124 13:59:55.373693  203121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:59:55.398685  203121 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:59:55.398711  203121 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:59:55.398719  203121 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 containerd true true} ...
	I1124 13:59:55.398825  203121 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-318786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-318786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:59:55.398898  203121 ssh_runner.go:195] Run: sudo crictl info
	I1124 13:59:55.429304  203121 cni.go:84] Creating CNI manager for ""
	I1124 13:59:55.429328  203121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:59:55.429372  203121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:59:55.429403  203121 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-318786 NodeName:old-k8s-version-318786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:59:55.429550  203121 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-318786"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
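The generated kubeadm.yaml above combines InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents in one file. As a hedged illustration (not part of minikube's own flow), one could sanity-check it on the node before init with the bundled kubeadm binary:

  # compare against upstream defaults for this release
  /var/lib/minikube/binaries/v1.28.0/kubeadm config print init-defaults
  # exercise the config without mutating the node
  sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run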
	
	I1124 13:59:55.429622  203121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 13:59:55.437772  203121 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:59:55.437895  203121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:59:55.445856  203121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1124 13:59:55.459167  203121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:59:55.473519  203121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1124 13:59:55.487760  203121 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:59:55.491722  203121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:59:55.502994  203121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:59:55.625341  203121 ssh_runner.go:195] Run: sudo systemctl start kubelet
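At this point the kubelet unit (352 bytes) and its 10-kubeadm.conf drop-in (326 bytes) have been written, systemd reloaded, and the service started. A quick way to inspect what systemd actually loaded (assumes shell access on the node):

  systemctl cat kubelet            # prints /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in
  systemctl status kubelet --no-pager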
	I1124 13:59:55.647018  203121 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786 for IP: 192.168.85.2
	I1124 13:59:55.647099  203121 certs.go:195] generating shared ca certs ...
	I1124 13:59:55.647130  203121 certs.go:227] acquiring lock for ca certs: {Name:mkcd8707c782acde0e57168c044a3df942dc4ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:55.647322  203121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key
	I1124 13:59:55.647396  203121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key
	I1124 13:59:55.647432  203121 certs.go:257] generating profile certs ...
	I1124 13:59:55.647513  203121 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.key
	I1124 13:59:55.647551  203121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt with IP's: []
	I1124 13:59:56.033129  203121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt ...
	I1124 13:59:56.033212  203121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: {Name:mk69bb915606644e0645060fa46449dd65f83095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:56.033449  203121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.key ...
	I1124 13:59:56.033488  203121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.key: {Name:mkfa34a7c8b2d69c736fc1cfd2304ae49133ac4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:56.033640  203121 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.key.251f69ae
	I1124 13:59:56.033684  203121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.crt.251f69ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 13:59:56.281567  203121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.crt.251f69ae ...
	I1124 13:59:56.281598  203121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.crt.251f69ae: {Name:mk572ed713bf0eec1d0b840d076729a08786aff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:56.281810  203121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.key.251f69ae ...
	I1124 13:59:56.281825  203121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.key.251f69ae: {Name:mk6b112bdb309b7ed87e7e056627f1c30ccc769a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:56.281918  203121 certs.go:382] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.crt.251f69ae -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.crt
	I1124 13:59:56.281996  203121 certs.go:386] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.key.251f69ae -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.key
	I1124 13:59:56.282057  203121 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/proxy-client.key
	I1124 13:59:56.282077  203121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/proxy-client.crt with IP's: []
	I1124 13:59:56.404952  203121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/proxy-client.crt ...
	I1124 13:59:56.404984  203121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/proxy-client.crt: {Name:mk506f5bcd13da36d0e32b27db8471ef560cbc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:56.405167  203121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/proxy-client.key ...
	I1124 13:59:56.405182  203121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/proxy-client.key: {Name:mk63bb1c02064c41d85f1d8bf24cb0b4a26d687a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:56.405366  203121 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem (1338 bytes)
	W1124 13:59:56.405416  203121 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178_empty.pem, impossibly tiny 0 bytes
	I1124 13:59:56.405425  203121 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 13:59:56.405454  203121 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem (1082 bytes)
	I1124 13:59:56.405487  203121 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:59:56.405517  203121 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem (1679 bytes)
	I1124 13:59:56.405566  203121 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem (1708 bytes)
	I1124 13:59:56.406138  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:59:56.425371  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:59:56.445638  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:59:56.465263  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:59:56.484806  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 13:59:56.503229  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 13:59:56.526839  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:59:56.546328  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 13:59:56.568330  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem --> /usr/share/ca-certificates/4178.pem (1338 bytes)
	I1124 13:59:56.588914  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /usr/share/ca-certificates/41782.pem (1708 bytes)
	I1124 13:59:56.609069  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:59:56.635519  203121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:59:56.649021  203121 ssh_runner.go:195] Run: openssl version
	I1124 13:59:56.655362  203121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41782.pem && ln -fs /usr/share/ca-certificates/41782.pem /etc/ssl/certs/41782.pem"
	I1124 13:59:56.664078  203121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41782.pem
	I1124 13:59:56.667939  203121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/41782.pem
	I1124 13:59:56.668018  203121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41782.pem
	I1124 13:59:56.709276  203121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:59:56.717713  203121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:59:56.725687  203121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:59:56.729416  203121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:59:56.729511  203121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:59:56.771028  203121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 13:59:56.779345  203121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4178.pem && ln -fs /usr/share/ca-certificates/4178.pem /etc/ssl/certs/4178.pem"
	I1124 13:59:56.787738  203121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4178.pem
	I1124 13:59:56.792183  203121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4178.pem
	I1124 13:59:56.792289  203121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4178.pem
	I1124 13:59:56.833374  203121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4178.pem /etc/ssl/certs/51391683.0"
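The openssl/ln sequence above follows the standard OpenSSL CA-store layout: each PEM is hashed with `openssl x509 -hash` and symlinked as `<hash>.0` under /etc/ssl/certs so TLS clients can locate it by subject hash. A generic sketch of that pattern; the file name comes from the log, the shell variable is purely illustrative:

  CERT=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$CERT")      # e.g. b5213941, as linked above
  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"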
	I1124 13:59:56.841910  203121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:59:56.845538  203121 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:59:56.845595  203121 kubeadm.go:401] StartCluster: {Name:old-k8s-version-318786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-318786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:59:56.845673  203121 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 13:59:56.845734  203121 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:59:56.874154  203121 cri.go:89] found id: ""
	I1124 13:59:56.874225  203121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:59:56.882169  203121 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:59:56.890196  203121 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:59:56.890264  203121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:59:56.898559  203121 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:59:56.898579  203121 kubeadm.go:158] found existing configuration files:
	
	I1124 13:59:56.898629  203121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:59:56.906476  203121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:59:56.906616  203121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:59:56.914551  203121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:59:56.922673  203121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:59:56.922748  203121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:59:56.931103  203121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:59:56.939465  203121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:59:56.939567  203121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:59:56.947086  203121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:59:56.955210  203121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:59:56.955302  203121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:59:56.963184  203121 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:59:57.020344  203121 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 13:59:57.020647  203121 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:59:57.059971  203121 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:59:57.060049  203121 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 13:59:57.060090  203121 kubeadm.go:319] OS: Linux
	I1124 13:59:57.060146  203121 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:59:57.060199  203121 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 13:59:57.060249  203121 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:59:57.060302  203121 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:59:57.060354  203121 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:59:57.060407  203121 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:59:57.060457  203121 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:59:57.060509  203121 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:59:57.060558  203121 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 13:59:57.153578  203121 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:59:57.153733  203121 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:59:57.153905  203121 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 13:59:57.330900  203121 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:59:57.336703  203121 out.go:252]   - Generating certificates and keys ...
	I1124 13:59:57.336796  203121 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:59:57.336870  203121 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:59:57.865889  203121 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:59:58.185353  203121 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:59:59.130735  203121 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:59:59.642294  203121 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:59:59.906079  203121 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:59:59.906451  203121 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-318786] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:00:00.123407  203121 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:00:00.123551  203121 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-318786] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:00:00.270762  203121 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:00:01.217860  203121 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:00:01.724986  203121 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:00:01.740375  203121 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:00:02.910438  203121 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:00:03.183161  203121 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:00:03.857453  203121 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:00:04.272263  203121 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:00:04.273275  203121 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:00:04.276092  203121 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:00:04.279552  203121 out.go:252]   - Booting up control plane ...
	I1124 14:00:04.279655  203121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:00:04.279733  203121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:00:04.279800  203121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:00:04.298211  203121 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:00:04.298994  203121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:00:04.299292  203121 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:00:04.444292  203121 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1124 14:00:11.451050  203121 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.006841 seconds
	I1124 14:00:11.451179  203121 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:00:11.470632  203121 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:00:12.039593  203121 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:00:12.039804  203121 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-318786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:00:12.553048  203121 kubeadm.go:319] [bootstrap-token] Using token: lzgex3.uugtb4pr04721m2a
	I1124 14:00:12.555996  203121 out.go:252]   - Configuring RBAC rules ...
	I1124 14:00:12.556126  203121 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:00:12.561765  203121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:00:12.571340  203121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:00:12.578855  203121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:00:12.583307  203121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:00:12.587409  203121 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:00:12.604367  203121 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:00:12.927512  203121 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:00:12.998711  203121 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:00:13.009044  203121 kubeadm.go:319] 
	I1124 14:00:13.009136  203121 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:00:13.009150  203121 kubeadm.go:319] 
	I1124 14:00:13.009228  203121 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:00:13.009237  203121 kubeadm.go:319] 
	I1124 14:00:13.009262  203121 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:00:13.009867  203121 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:00:13.009932  203121 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:00:13.009946  203121 kubeadm.go:319] 
	I1124 14:00:13.010001  203121 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:00:13.010013  203121 kubeadm.go:319] 
	I1124 14:00:13.010061  203121 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:00:13.010069  203121 kubeadm.go:319] 
	I1124 14:00:13.010122  203121 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:00:13.010202  203121 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:00:13.010274  203121 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:00:13.010281  203121 kubeadm.go:319] 
	I1124 14:00:13.010670  203121 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:00:13.010761  203121 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:00:13.010771  203121 kubeadm.go:319] 
	I1124 14:00:13.011083  203121 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lzgex3.uugtb4pr04721m2a \
	I1124 14:00:13.011197  203121 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac \
	I1124 14:00:13.011480  203121 kubeadm.go:319] 	--control-plane 
	I1124 14:00:13.011502  203121 kubeadm.go:319] 
	I1124 14:00:13.011780  203121 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:00:13.011795  203121 kubeadm.go:319] 
	I1124 14:00:13.012105  203121 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lzgex3.uugtb4pr04721m2a \
	I1124 14:00:13.012432  203121 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac 
	I1124 14:00:13.016246  203121 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:00:13.016372  203121 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
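The join commands printed by kubeadm above embed a --discovery-token-ca-cert-hash, which is simply the SHA-256 of the cluster CA's public key. A hedged sketch for recomputing it on this control plane; the CA path is the certificateDir used above, the pipeline is the standard kubeadm recipe and assumes an RSA CA key:

  sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
  # should print aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac, matching the join command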
	I1124 14:00:13.016396  203121 cni.go:84] Creating CNI manager for ""
	I1124 14:00:13.016409  203121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:00:13.019688  203121 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:00:13.022683  203121 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:00:13.034888  203121 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 14:00:13.034906  203121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:00:13.059514  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 14:00:14.290955  203121 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.231358047s)
	I1124 14:00:14.291008  203121 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:00:14.291124  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:14.291189  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-318786 minikube.k8s.io/updated_at=2025_11_24T14_00_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=old-k8s-version-318786 minikube.k8s.io/primary=true
	I1124 14:00:14.451653  203121 ops.go:34] apiserver oom_adj: -16
	I1124 14:00:14.451772  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:14.952612  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:15.452444  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:15.952508  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:16.452482  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:16.952838  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:17.452425  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:17.951984  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:18.452384  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:18.952884  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:19.452844  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:19.951825  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:20.452041  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:20.954241  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:21.452323  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:21.952432  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:22.451804  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:22.951865  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:23.452374  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:23.952376  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:24.452544  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:24.952573  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:25.451889  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:25.951879  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:26.060844  203121 kubeadm.go:1114] duration metric: took 11.769763814s to wait for elevateKubeSystemPrivileges
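The repeated `kubectl get sa default` calls above are a poll: minikube retries roughly every 500ms until the "default" ServiceAccount exists, then grants the kube-system privileges. A minimal equivalent of that wait loop (binary path and kubeconfig are from the log; the loop itself is only illustrative):

  until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
  done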
	I1124 14:00:26.060873  203121 kubeadm.go:403] duration metric: took 29.215284106s to StartCluster
	I1124 14:00:26.060891  203121 settings.go:142] acquiring lock: {Name:mk2b0bbff4d8ced468f457362668d43b813dc062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:26.060955  203121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:00:26.061937  203121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/kubeconfig: {Name:mk246d21eaffbd8aca2abdc1b2f89d6fcc902f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:26.062157  203121 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 14:00:26.062320  203121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:00:26.062598  203121 config.go:182] Loaded profile config "old-k8s-version-318786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 14:00:26.062635  203121 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:00:26.062693  203121 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-318786"
	I1124 14:00:26.062708  203121 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-318786"
	I1124 14:00:26.062728  203121 host.go:66] Checking if "old-k8s-version-318786" exists ...
	I1124 14:00:26.063138  203121 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-318786"
	I1124 14:00:26.063163  203121 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-318786"
	I1124 14:00:26.063454  203121 cli_runner.go:164] Run: docker container inspect old-k8s-version-318786 --format={{.State.Status}}
	I1124 14:00:26.063514  203121 cli_runner.go:164] Run: docker container inspect old-k8s-version-318786 --format={{.State.Status}}
	I1124 14:00:26.066058  203121 out.go:179] * Verifying Kubernetes components...
	I1124 14:00:26.069103  203121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:00:26.111201  203121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:00:26.116253  203121 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-318786"
	I1124 14:00:26.116292  203121 host.go:66] Checking if "old-k8s-version-318786" exists ...
	I1124 14:00:26.116709  203121 cli_runner.go:164] Run: docker container inspect old-k8s-version-318786 --format={{.State.Status}}
	I1124 14:00:26.116830  203121 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:00:26.116844  203121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:00:26.116892  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 14:00:26.150658  203121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa Username:docker}
	I1124 14:00:26.161222  203121 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:00:26.161243  203121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:00:26.161315  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 14:00:26.189630  203121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa Username:docker}
	I1124 14:00:26.424488  203121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 14:00:26.425624  203121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:00:26.485066  203121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:00:26.513639  203121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:00:27.429647  203121 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.003904202s)
	I1124 14:00:27.430570  203121 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-318786" to be "Ready" ...
	I1124 14:00:27.431403  203121 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.005751061s)
	I1124 14:00:27.431468  203121 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 14:00:27.809398  203121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.324299578s)
	I1124 14:00:27.809491  203121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.295828834s)
	I1124 14:00:27.819279  203121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 14:00:27.822260  203121 addons.go:530] duration metric: took 1.759614941s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 14:00:27.936206  203121 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-318786" context rescaled to 1 replicas
	W1124 14:00:29.434630  203121 node_ready.go:57] node "old-k8s-version-318786" has "Ready":"False" status (will retry)
	W1124 14:00:31.933623  203121 node_ready.go:57] node "old-k8s-version-318786" has "Ready":"False" status (will retry)
	W1124 14:00:33.934195  203121 node_ready.go:57] node "old-k8s-version-318786" has "Ready":"False" status (will retry)
	W1124 14:00:36.434086  203121 node_ready.go:57] node "old-k8s-version-318786" has "Ready":"False" status (will retry)
	W1124 14:00:38.434475  203121 node_ready.go:57] node "old-k8s-version-318786" has "Ready":"False" status (will retry)
	I1124 14:00:39.437064  203121 node_ready.go:49] node "old-k8s-version-318786" is "Ready"
	I1124 14:00:39.437091  203121 node_ready.go:38] duration metric: took 12.006466784s for node "old-k8s-version-318786" to be "Ready" ...
	I1124 14:00:39.437104  203121 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:00:39.437165  203121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:00:39.453227  203121 api_server.go:72] duration metric: took 13.391041621s to wait for apiserver process to appear ...
	I1124 14:00:39.453251  203121 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:00:39.453271  203121 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 14:00:39.462068  203121 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 14:00:39.463530  203121 api_server.go:141] control plane version: v1.28.0
	I1124 14:00:39.463554  203121 api_server.go:131] duration metric: took 10.295662ms to wait for apiserver health ...
	I1124 14:00:39.463563  203121 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:00:39.467352  203121 system_pods.go:59] 8 kube-system pods found
	I1124 14:00:39.467391  203121 system_pods.go:61] "coredns-5dd5756b68-n7s8h" [72202b02-1ca2-4c69-ad47-3f1ef90ba8ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:00:39.467397  203121 system_pods.go:61] "etcd-old-k8s-version-318786" [dd78fff3-f901-4dc0-9c77-907dbb69e36d] Running
	I1124 14:00:39.467402  203121 system_pods.go:61] "kindnet-z4rkx" [053d781f-846e-4391-a537-edd057019339] Running
	I1124 14:00:39.467406  203121 system_pods.go:61] "kube-apiserver-old-k8s-version-318786" [7f0596ec-97f5-4a70-974d-38c5d9a51273] Running
	I1124 14:00:39.467410  203121 system_pods.go:61] "kube-controller-manager-old-k8s-version-318786" [4ae0e32a-b5f2-4e37-82d1-d76bfabbedd5] Running
	I1124 14:00:39.467414  203121 system_pods.go:61] "kube-proxy-jwmdg" [11a8b197-dd22-45df-9593-66d16fdefa80] Running
	I1124 14:00:39.467418  203121 system_pods.go:61] "kube-scheduler-old-k8s-version-318786" [01641e80-7a9e-48c2-b9e3-d384beab62d7] Running
	I1124 14:00:39.467423  203121 system_pods.go:61] "storage-provisioner" [2298aa73-9529-42f0-a0ec-22197acfa4ba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:00:39.467428  203121 system_pods.go:74] duration metric: took 3.859916ms to wait for pod list to return data ...
	I1124 14:00:39.467435  203121 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:00:39.470040  203121 default_sa.go:45] found service account: "default"
	I1124 14:00:39.470060  203121 default_sa.go:55] duration metric: took 2.619768ms for default service account to be created ...
	I1124 14:00:39.470070  203121 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:00:39.473490  203121 system_pods.go:86] 8 kube-system pods found
	I1124 14:00:39.473522  203121 system_pods.go:89] "coredns-5dd5756b68-n7s8h" [72202b02-1ca2-4c69-ad47-3f1ef90ba8ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:00:39.473528  203121 system_pods.go:89] "etcd-old-k8s-version-318786" [dd78fff3-f901-4dc0-9c77-907dbb69e36d] Running
	I1124 14:00:39.473534  203121 system_pods.go:89] "kindnet-z4rkx" [053d781f-846e-4391-a537-edd057019339] Running
	I1124 14:00:39.473539  203121 system_pods.go:89] "kube-apiserver-old-k8s-version-318786" [7f0596ec-97f5-4a70-974d-38c5d9a51273] Running
	I1124 14:00:39.473543  203121 system_pods.go:89] "kube-controller-manager-old-k8s-version-318786" [4ae0e32a-b5f2-4e37-82d1-d76bfabbedd5] Running
	I1124 14:00:39.473547  203121 system_pods.go:89] "kube-proxy-jwmdg" [11a8b197-dd22-45df-9593-66d16fdefa80] Running
	I1124 14:00:39.473552  203121 system_pods.go:89] "kube-scheduler-old-k8s-version-318786" [01641e80-7a9e-48c2-b9e3-d384beab62d7] Running
	I1124 14:00:39.473558  203121 system_pods.go:89] "storage-provisioner" [2298aa73-9529-42f0-a0ec-22197acfa4ba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:00:39.473585  203121 retry.go:31] will retry after 299.487693ms: missing components: kube-dns
	I1124 14:00:39.780995  203121 system_pods.go:86] 8 kube-system pods found
	I1124 14:00:39.781029  203121 system_pods.go:89] "coredns-5dd5756b68-n7s8h" [72202b02-1ca2-4c69-ad47-3f1ef90ba8ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:00:39.781036  203121 system_pods.go:89] "etcd-old-k8s-version-318786" [dd78fff3-f901-4dc0-9c77-907dbb69e36d] Running
	I1124 14:00:39.781043  203121 system_pods.go:89] "kindnet-z4rkx" [053d781f-846e-4391-a537-edd057019339] Running
	I1124 14:00:39.781047  203121 system_pods.go:89] "kube-apiserver-old-k8s-version-318786" [7f0596ec-97f5-4a70-974d-38c5d9a51273] Running
	I1124 14:00:39.781051  203121 system_pods.go:89] "kube-controller-manager-old-k8s-version-318786" [4ae0e32a-b5f2-4e37-82d1-d76bfabbedd5] Running
	I1124 14:00:39.781055  203121 system_pods.go:89] "kube-proxy-jwmdg" [11a8b197-dd22-45df-9593-66d16fdefa80] Running
	I1124 14:00:39.781061  203121 system_pods.go:89] "kube-scheduler-old-k8s-version-318786" [01641e80-7a9e-48c2-b9e3-d384beab62d7] Running
	I1124 14:00:39.781067  203121 system_pods.go:89] "storage-provisioner" [2298aa73-9529-42f0-a0ec-22197acfa4ba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:00:39.781080  203121 retry.go:31] will retry after 249.97776ms: missing components: kube-dns
	I1124 14:00:40.063092  203121 system_pods.go:86] 8 kube-system pods found
	I1124 14:00:40.063130  203121 system_pods.go:89] "coredns-5dd5756b68-n7s8h" [72202b02-1ca2-4c69-ad47-3f1ef90ba8ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:00:40.063139  203121 system_pods.go:89] "etcd-old-k8s-version-318786" [dd78fff3-f901-4dc0-9c77-907dbb69e36d] Running
	I1124 14:00:40.063145  203121 system_pods.go:89] "kindnet-z4rkx" [053d781f-846e-4391-a537-edd057019339] Running
	I1124 14:00:40.063149  203121 system_pods.go:89] "kube-apiserver-old-k8s-version-318786" [7f0596ec-97f5-4a70-974d-38c5d9a51273] Running
	I1124 14:00:40.063180  203121 system_pods.go:89] "kube-controller-manager-old-k8s-version-318786" [4ae0e32a-b5f2-4e37-82d1-d76bfabbedd5] Running
	I1124 14:00:40.063193  203121 system_pods.go:89] "kube-proxy-jwmdg" [11a8b197-dd22-45df-9593-66d16fdefa80] Running
	I1124 14:00:40.063198  203121 system_pods.go:89] "kube-scheduler-old-k8s-version-318786" [01641e80-7a9e-48c2-b9e3-d384beab62d7] Running
	I1124 14:00:40.063203  203121 system_pods.go:89] "storage-provisioner" [2298aa73-9529-42f0-a0ec-22197acfa4ba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:00:40.063219  203121 retry.go:31] will retry after 436.268576ms: missing components: kube-dns
	I1124 14:00:40.504166  203121 system_pods.go:86] 8 kube-system pods found
	I1124 14:00:40.504245  203121 system_pods.go:89] "coredns-5dd5756b68-n7s8h" [72202b02-1ca2-4c69-ad47-3f1ef90ba8ba] Running
	I1124 14:00:40.504259  203121 system_pods.go:89] "etcd-old-k8s-version-318786" [dd78fff3-f901-4dc0-9c77-907dbb69e36d] Running
	I1124 14:00:40.504264  203121 system_pods.go:89] "kindnet-z4rkx" [053d781f-846e-4391-a537-edd057019339] Running
	I1124 14:00:40.504269  203121 system_pods.go:89] "kube-apiserver-old-k8s-version-318786" [7f0596ec-97f5-4a70-974d-38c5d9a51273] Running
	I1124 14:00:40.504274  203121 system_pods.go:89] "kube-controller-manager-old-k8s-version-318786" [4ae0e32a-b5f2-4e37-82d1-d76bfabbedd5] Running
	I1124 14:00:40.504279  203121 system_pods.go:89] "kube-proxy-jwmdg" [11a8b197-dd22-45df-9593-66d16fdefa80] Running
	I1124 14:00:40.504283  203121 system_pods.go:89] "kube-scheduler-old-k8s-version-318786" [01641e80-7a9e-48c2-b9e3-d384beab62d7] Running
	I1124 14:00:40.504287  203121 system_pods.go:89] "storage-provisioner" [2298aa73-9529-42f0-a0ec-22197acfa4ba] Running
	I1124 14:00:40.504296  203121 system_pods.go:126] duration metric: took 1.034219513s to wait for k8s-apps to be running ...
	I1124 14:00:40.504307  203121 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:00:40.504364  203121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:00:40.517880  203121 system_svc.go:56] duration metric: took 13.563315ms WaitForService to wait for kubelet
	I1124 14:00:40.517964  203121 kubeadm.go:587] duration metric: took 14.455781279s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:00:40.517991  203121 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:00:40.520930  203121 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:00:40.520963  203121 node_conditions.go:123] node cpu capacity is 2
	I1124 14:00:40.520978  203121 node_conditions.go:105] duration metric: took 2.980003ms to run NodePressure ...
	I1124 14:00:40.520990  203121 start.go:242] waiting for startup goroutines ...
	I1124 14:00:40.520998  203121 start.go:247] waiting for cluster config update ...
	I1124 14:00:40.521010  203121 start.go:256] writing updated cluster config ...
	I1124 14:00:40.521298  203121 ssh_runner.go:195] Run: rm -f paused
	I1124 14:00:40.525324  203121 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:00:40.529797  203121 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-n7s8h" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:40.535607  203121 pod_ready.go:94] pod "coredns-5dd5756b68-n7s8h" is "Ready"
	I1124 14:00:40.535639  203121 pod_ready.go:86] duration metric: took 5.816258ms for pod "coredns-5dd5756b68-n7s8h" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:40.539181  203121 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-318786" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:40.544657  203121 pod_ready.go:94] pod "etcd-old-k8s-version-318786" is "Ready"
	I1124 14:00:40.544685  203121 pod_ready.go:86] duration metric: took 5.478924ms for pod "etcd-old-k8s-version-318786" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:40.548165  203121 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-318786" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:40.553506  203121 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-318786" is "Ready"
	I1124 14:00:40.553538  203121 pod_ready.go:86] duration metric: took 5.343284ms for pod "kube-apiserver-old-k8s-version-318786" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:40.556924  203121 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-318786" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:40.929692  203121 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-318786" is "Ready"
	I1124 14:00:40.929725  203121 pod_ready.go:86] duration metric: took 372.7723ms for pod "kube-controller-manager-old-k8s-version-318786" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:41.130990  203121 pod_ready.go:83] waiting for pod "kube-proxy-jwmdg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:41.530005  203121 pod_ready.go:94] pod "kube-proxy-jwmdg" is "Ready"
	I1124 14:00:41.530034  203121 pod_ready.go:86] duration metric: took 399.016962ms for pod "kube-proxy-jwmdg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:41.730026  203121 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-318786" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:42.131071  203121 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-318786" is "Ready"
	I1124 14:00:42.131114  203121 pod_ready.go:86] duration metric: took 401.061008ms for pod "kube-scheduler-old-k8s-version-318786" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:42.131129  203121 pod_ready.go:40] duration metric: took 1.60575817s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:00:42.200914  203121 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1124 14:00:42.204172  203121 out.go:203] 
	W1124 14:00:42.207213  203121 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 14:00:42.210285  203121 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 14:00:42.214390  203121 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-318786" cluster and "default" namespace by default
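	For reference, the CoreDNS rewrite logged at 14:00:26-14:00:27 above patches the kube-system/coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.85.1 on this network): the sed pipeline inserts a hosts block (with fallthrough) ahead of the forward plugin and a log directive before errors. A minimal way to confirm the injected stanza after the run, assuming kubectl is pointed at this cluster, is:
	
	  # Sketch only: dump the patched Corefile and look for the injected
	  # "hosts { 192.168.85.1 host.minikube.internal ... }" block.
	  kubectl --context old-k8s-version-318786 -n kube-system get configmap coredns -o yaml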
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	70e558ad037eb       1611cd07b61d5       7 seconds ago       Running             busybox                   0                   f472329e9fd63       busybox                                          default
	33ca9b6d24a80       ba04bb24b9575       13 seconds ago      Running             storage-provisioner       0                   9de766e43deb4       storage-provisioner                              kube-system
	d1e4297a18de5       97e04611ad434       13 seconds ago      Running             coredns                   0                   1b5cb0ca09af2       coredns-5dd5756b68-n7s8h                         kube-system
	8a5ceb46ea7cb       b1a8c6f707935       24 seconds ago      Running             kindnet-cni               0                   e9b4fd516b97c       kindnet-z4rkx                                    kube-system
	e431b25999ece       940f54a5bcae9       26 seconds ago      Running             kube-proxy                0                   68384e9c54fe8       kube-proxy-jwmdg                                 kube-system
	64ea1db6adeec       00543d2fe5d71       46 seconds ago      Running             kube-apiserver            0                   6e10952c6964b       kube-apiserver-old-k8s-version-318786            kube-system
	d422fb0577ca7       46cc66ccc7c19       46 seconds ago      Running             kube-controller-manager   0                   ede8e07dcdc74       kube-controller-manager-old-k8s-version-318786   kube-system
	0769df21ce83c       762dce4090c5f       46 seconds ago      Running             kube-scheduler            0                   6729e51d9cdf6       kube-scheduler-old-k8s-version-318786            kube-system
	a96dcde7b48e2       9cdd6470f48c8       47 seconds ago      Running             etcd                      0                   388ca052bc258       etcd-old-k8s-version-318786                      kube-system
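	The listing above reflects CRI-visible container state on the node. With the containerd runtime used by this profile, an equivalent view can typically be pulled over SSH with crictl (a sketch; the exact command used by the harness is not shown in the log):
	
	  # Sketch: list all CRI containers (running and exited) on the minikube node.
	  minikube -p old-k8s-version-318786 ssh -- sudo crictl ps -a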
	
	
	==> containerd <==
	Nov 24 14:00:39 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:39.610323957Z" level=info msg="connecting to shim d1e4297a18de5a35eef1e955a0f6b73d8881ba2296e59d8acaed4614dce5de51" address="unix:///run/containerd/s/f62f275e67577be37030e893196dc98d73b2044e58d241d1a7f99ccee4904d24" protocol=ttrpc version=3
	Nov 24 14:00:39 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:39.620434869Z" level=info msg="CreateContainer within sandbox \"9de766e43deb416449962bc7301bab891c72b0af9fb329bb4d8e4ff8ef66bff4\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Nov 24 14:00:39 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:39.644253353Z" level=info msg="Container 33ca9b6d24a80a1f0470355c5dc5bf87df622a7ffd33dad20b3a66e3d42820fb: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 14:00:39 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:39.663968611Z" level=info msg="CreateContainer within sandbox \"9de766e43deb416449962bc7301bab891c72b0af9fb329bb4d8e4ff8ef66bff4\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"33ca9b6d24a80a1f0470355c5dc5bf87df622a7ffd33dad20b3a66e3d42820fb\""
	Nov 24 14:00:39 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:39.665502745Z" level=info msg="StartContainer for \"33ca9b6d24a80a1f0470355c5dc5bf87df622a7ffd33dad20b3a66e3d42820fb\""
	Nov 24 14:00:39 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:39.666531739Z" level=info msg="connecting to shim 33ca9b6d24a80a1f0470355c5dc5bf87df622a7ffd33dad20b3a66e3d42820fb" address="unix:///run/containerd/s/25a7b18f3f0941131e8c32d45d1f9f3bcee38bf8a73b1e3195d36d7532fce44f" protocol=ttrpc version=3
	Nov 24 14:00:39 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:39.711141089Z" level=info msg="StartContainer for \"d1e4297a18de5a35eef1e955a0f6b73d8881ba2296e59d8acaed4614dce5de51\" returns successfully"
	Nov 24 14:00:39 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:39.756705480Z" level=info msg="StartContainer for \"33ca9b6d24a80a1f0470355c5dc5bf87df622a7ffd33dad20b3a66e3d42820fb\" returns successfully"
	Nov 24 14:00:42 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:42.744539553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f566ecf0-6907-464c-9185-0f1cac06d38f,Namespace:default,Attempt:0,}"
	Nov 24 14:00:42 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:42.796555926Z" level=info msg="connecting to shim f472329e9fd635f4d2ecb8d02d86100f8c593bf1ea6b1e68f6aab8b27bbcb144" address="unix:///run/containerd/s/e47b16e174c686888228b35f0ff63c9e1e5e13d47c7f7c2e532fdeedd0981c84" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 14:00:42 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:42.853864201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f566ecf0-6907-464c-9185-0f1cac06d38f,Namespace:default,Attempt:0,} returns sandbox id \"f472329e9fd635f4d2ecb8d02d86100f8c593bf1ea6b1e68f6aab8b27bbcb144\""
	Nov 24 14:00:42 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:42.855634629Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.151334885Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.153450408Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.156363448Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.161551496Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.162193515Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.306509548s"
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.162249565Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.179990125Z" level=info msg="CreateContainer within sandbox \"f472329e9fd635f4d2ecb8d02d86100f8c593bf1ea6b1e68f6aab8b27bbcb144\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.248189267Z" level=info msg="Container 70e558ad037eb593fa44b07e4fd36f48454dee00712743ce51a58d742a33605b: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.262062616Z" level=info msg="CreateContainer within sandbox \"f472329e9fd635f4d2ecb8d02d86100f8c593bf1ea6b1e68f6aab8b27bbcb144\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"70e558ad037eb593fa44b07e4fd36f48454dee00712743ce51a58d742a33605b\""
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.263385826Z" level=info msg="StartContainer for \"70e558ad037eb593fa44b07e4fd36f48454dee00712743ce51a58d742a33605b\""
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.265058480Z" level=info msg="connecting to shim 70e558ad037eb593fa44b07e4fd36f48454dee00712743ce51a58d742a33605b" address="unix:///run/containerd/s/e47b16e174c686888228b35f0ff63c9e1e5e13d47c7f7c2e532fdeedd0981c84" protocol=ttrpc version=3
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.370550827Z" level=info msg="StartContainer for \"70e558ad037eb593fa44b07e4fd36f48454dee00712743ce51a58d742a33605b\" returns successfully"
	Nov 24 14:00:51 old-k8s-version-318786 containerd[755]: E1124 14:00:51.571973     755 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
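	The containerd entries above cover the busybox sandbox creation, the pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc (about 2.3s), and the container start. If the pull step needs to be reproduced in isolation, a hedged equivalent through the CRI is:
	
	  # Sketch: pull the same test image through containerd's CRI endpoint.
	  minikube -p old-k8s-version-318786 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc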
	
	
	==> coredns [d1e4297a18de5a35eef1e955a0f6b73d8881ba2296e59d8acaed4614dce5de51] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60396 - 50045 "HINFO IN 8149976766644082851.319243235608499577. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006788489s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-318786
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-318786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=old-k8s-version-318786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_00_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:00:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-318786
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:00:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:00:43 +0000   Mon, 24 Nov 2025 14:00:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:00:43 +0000   Mon, 24 Nov 2025 14:00:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:00:43 +0000   Mon, 24 Nov 2025 14:00:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:00:43 +0000   Mon, 24 Nov 2025 14:00:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-318786
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                259561de-786f-47f9-8e4d-12bddad03b80
	  Boot ID:                    dd480c26-e101-4930-b98c-54c06b430fdc
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-n7s8h                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-318786                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kindnet-z4rkx                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-318786             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-318786    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-jwmdg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-318786             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s   kubelet          Node old-k8s-version-318786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s   kubelet          Node old-k8s-version-318786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s   kubelet          Node old-k8s-version-318786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27s   node-controller  Node old-k8s-version-318786 event: Registered Node old-k8s-version-318786 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-318786 status is now: NodeReady
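	The node description above matches the node_ready wait in the start log: the Ready condition transitions at 14:00:39, corresponding to the roughly 12s node_ready wait recorded earlier. A narrower check of just that condition, assuming kubectl access to the cluster, is:
	
	  # Sketch: query only the Ready condition the test's node_ready wait polls for.
	  kubectl --context old-k8s-version-318786 get node old-k8s-version-318786 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'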
	
	
	==> dmesg <==
	[Nov24 12:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014697] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497291] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033884] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.804993] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.476130] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [a96dcde7b48e2020162f86ef991d82171cf903dc40c2588013e878e07607a6eb] <==
	{"level":"info","ts":"2025-11-24T14:00:05.836588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-24T14:00:05.836695Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-24T14:00:05.836985Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-24T14:00:05.83715Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T14:00:05.837189Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T14:00:05.837186Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-24T14:00:05.837211Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-24T14:00:06.715956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-24T14:00:06.716187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-24T14:00:06.716278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-24T14:00:06.716408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-24T14:00:06.716497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-24T14:00:06.716591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-24T14:00:06.716663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-24T14:00:06.719119Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:00:06.724173Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-318786 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T14:00:06.727971Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:00:06.728194Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:00:06.728301Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:00:06.728041Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T14:00:06.732297Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-24T14:00:06.728075Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T14:00:06.73389Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T14:00:06.739971Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T14:00:06.747818Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:00:52 up  1:43,  0 user,  load average: 3.35, 3.70, 3.04
	Linux old-k8s-version-318786 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8a5ceb46ea7cbcd9a345bdf9ba11d0c7a3a990148842c5c44246730c76d8948d] <==
	I1124 14:00:28.769606       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:00:28.860713       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 14:00:28.860851       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:00:28.860870       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:00:28.860885       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:00:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:00:29.062756       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:00:29.064202       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:00:29.064283       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:00:29.064439       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 14:00:29.264984       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:00:29.265101       1 metrics.go:72] Registering metrics
	I1124 14:00:29.265206       1 controller.go:711] "Syncing nftables rules"
	I1124 14:00:39.066105       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:00:39.066164       1 main.go:301] handling current node
	I1124 14:00:49.064077       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:00:49.064224       1 main.go:301] handling current node
	
	
	==> kube-apiserver [64ea1db6adeecccf4211992b471a4088bba1825d5764c029cd41c736f16d8131] <==
	I1124 14:00:09.559574       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 14:00:09.559602       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 14:00:09.567371       1 aggregator.go:166] initial CRD sync complete...
	I1124 14:00:09.567396       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 14:00:09.567404       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 14:00:09.567413       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:00:09.570195       1 controller.go:624] quota admission added evaluator for: namespaces
	E1124 14:00:09.602455       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1124 14:00:09.654324       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1124 14:00:09.818311       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:00:10.356017       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 14:00:10.369141       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 14:00:10.369180       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:00:11.220927       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:00:11.271999       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:00:11.406464       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 14:00:11.418391       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 14:00:11.420227       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 14:00:11.426883       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:00:11.578646       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 14:00:12.895802       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 14:00:12.925996       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 14:00:12.938109       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1124 14:00:25.666171       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1124 14:00:25.763116       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [d422fb0577ca71bb502e21fc4c5afd81d722a57cf4424a6d0acafef3ae4afb9a] <==
	I1124 14:00:25.810858       1 range_allocator.go:380] "Set node PodCIDR" node="old-k8s-version-318786" podCIDRs=["10.244.0.0/24"]
	I1124 14:00:25.820650       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-318786" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1124 14:00:25.832097       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-318786" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1124 14:00:25.835948       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-n7s8h"
	I1124 14:00:25.836226       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-z4rkx"
	I1124 14:00:25.844347       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jwmdg"
	I1124 14:00:25.872136       1 shared_informer.go:318] Caches are synced for HPA
	I1124 14:00:25.873361       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-nhcwg"
	I1124 14:00:25.905108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="206.561387ms"
	I1124 14:00:25.943326       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="37.986992ms"
	I1124 14:00:25.943650       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.392µs"
	I1124 14:00:26.225808       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 14:00:26.225842       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 14:00:26.240729       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 14:00:27.499329       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1124 14:00:27.521996       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-nhcwg"
	I1124 14:00:27.537841       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="38.305665ms"
	I1124 14:00:27.559719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.827601ms"
	I1124 14:00:27.559805       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.644µs"
	I1124 14:00:39.122848       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.538µs"
	I1124 14:00:39.150933       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.22µs"
	I1124 14:00:40.276969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="184.922µs"
	I1124 14:00:40.328812       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.750431ms"
	I1124 14:00:40.330201       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.312µs"
	I1124 14:00:40.747463       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
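	The controller-manager entries above include the coredns Deployment being scaled down from 2 replicas to 1 at 14:00:27, which is the rescale the start log reports as "rescaled to 1 replicas". The equivalent manual operation, shown only as a sketch, would be:
	
	  # Sketch: the scale-down the controller-manager recorded at 14:00:27.
	  kubectl --context old-k8s-version-318786 -n kube-system scale deployment coredns --replicas=1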
	
	
	==> kube-proxy [e431b25999ece5eb3499ee68f2c85868448494e4787845d9737ad20b4a20f2f8] <==
	I1124 14:00:26.865991       1 server_others.go:69] "Using iptables proxy"
	I1124 14:00:26.884883       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1124 14:00:26.934067       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:00:26.935893       1 server_others.go:152] "Using iptables Proxier"
	I1124 14:00:26.936119       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 14:00:26.936132       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 14:00:26.936170       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 14:00:26.936420       1 server.go:846] "Version info" version="v1.28.0"
	I1124 14:00:26.936439       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:00:26.937512       1 config.go:188] "Starting service config controller"
	I1124 14:00:26.937582       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 14:00:26.937602       1 config.go:97] "Starting endpoint slice config controller"
	I1124 14:00:26.937606       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 14:00:26.938430       1 config.go:315] "Starting node config controller"
	I1124 14:00:26.938440       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 14:00:27.039179       1 shared_informer.go:318] Caches are synced for node config
	I1124 14:00:27.039222       1 shared_informer.go:318] Caches are synced for service config
	I1124 14:00:27.039271       1 shared_informer.go:318] Caches are synced for endpoint slice config
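	kube-proxy above is running in iptables mode ("Using iptables Proxier"), so service routing for this cluster lives in the KUBE-* iptables chains on the node. A quick, hedged way to inspect the top-level services chain:
	
	  # Sketch: show the KUBE-SERVICES chain kube-proxy programs in iptables mode.
	  minikube -p old-k8s-version-318786 ssh -- sudo iptables -t nat -L KUBE-SERVICES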
	
	
	==> kube-scheduler [0769df21ce83c4995a35d15a4e7ae3000b8a5d86168fda1bff6738b8943c92ef] <==
	W1124 14:00:10.860716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1124 14:00:10.860734       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1124 14:00:10.861473       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1124 14:00:10.861503       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 14:00:10.866658       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1124 14:00:10.866694       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1124 14:00:10.866737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1124 14:00:10.866752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1124 14:00:10.867029       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 14:00:10.867053       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 14:00:10.867116       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1124 14:00:10.867134       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1124 14:00:10.867194       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1124 14:00:10.867211       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1124 14:00:10.867277       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1124 14:00:10.867299       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1124 14:00:10.869201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1124 14:00:10.869232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1124 14:00:10.869290       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1124 14:00:10.869420       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1124 14:00:10.869379       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1124 14:00:10.869453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1124 14:00:10.870338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1124 14:00:10.870513       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1124 14:00:11.746244       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 14:00:25 old-k8s-version-318786 kubelet[1527]: I1124 14:00:25.889706    1527 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 14:00:25 old-k8s-version-318786 kubelet[1527]: I1124 14:00:25.891508    1527 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 14:00:25 old-k8s-version-318786 kubelet[1527]: I1124 14:00:25.894804    1527 topology_manager.go:215] "Topology Admit Handler" podUID="11a8b197-dd22-45df-9593-66d16fdefa80" podNamespace="kube-system" podName="kube-proxy-jwmdg"
	Nov 24 14:00:25 old-k8s-version-318786 kubelet[1527]: I1124 14:00:25.914677    1527 topology_manager.go:215] "Topology Admit Handler" podUID="053d781f-846e-4391-a537-edd057019339" podNamespace="kube-system" podName="kindnet-z4rkx"
	Nov 24 14:00:26 old-k8s-version-318786 kubelet[1527]: I1124 14:00:26.018048    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/053d781f-846e-4391-a537-edd057019339-lib-modules\") pod \"kindnet-z4rkx\" (UID: \"053d781f-846e-4391-a537-edd057019339\") " pod="kube-system/kindnet-z4rkx"
	Nov 24 14:00:26 old-k8s-version-318786 kubelet[1527]: I1124 14:00:26.018107    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/11a8b197-dd22-45df-9593-66d16fdefa80-kube-proxy\") pod \"kube-proxy-jwmdg\" (UID: \"11a8b197-dd22-45df-9593-66d16fdefa80\") " pod="kube-system/kube-proxy-jwmdg"
	Nov 24 14:00:26 old-k8s-version-318786 kubelet[1527]: I1124 14:00:26.018131    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11a8b197-dd22-45df-9593-66d16fdefa80-lib-modules\") pod \"kube-proxy-jwmdg\" (UID: \"11a8b197-dd22-45df-9593-66d16fdefa80\") " pod="kube-system/kube-proxy-jwmdg"
	Nov 24 14:00:26 old-k8s-version-318786 kubelet[1527]: I1124 14:00:26.018158    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11a8b197-dd22-45df-9593-66d16fdefa80-xtables-lock\") pod \"kube-proxy-jwmdg\" (UID: \"11a8b197-dd22-45df-9593-66d16fdefa80\") " pod="kube-system/kube-proxy-jwmdg"
	Nov 24 14:00:26 old-k8s-version-318786 kubelet[1527]: I1124 14:00:26.018212    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wldf\" (UniqueName: \"kubernetes.io/projected/053d781f-846e-4391-a537-edd057019339-kube-api-access-2wldf\") pod \"kindnet-z4rkx\" (UID: \"053d781f-846e-4391-a537-edd057019339\") " pod="kube-system/kindnet-z4rkx"
	Nov 24 14:00:26 old-k8s-version-318786 kubelet[1527]: I1124 14:00:26.018240    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/053d781f-846e-4391-a537-edd057019339-cni-cfg\") pod \"kindnet-z4rkx\" (UID: \"053d781f-846e-4391-a537-edd057019339\") " pod="kube-system/kindnet-z4rkx"
	Nov 24 14:00:26 old-k8s-version-318786 kubelet[1527]: I1124 14:00:26.018265    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/053d781f-846e-4391-a537-edd057019339-xtables-lock\") pod \"kindnet-z4rkx\" (UID: \"053d781f-846e-4391-a537-edd057019339\") " pod="kube-system/kindnet-z4rkx"
	Nov 24 14:00:26 old-k8s-version-318786 kubelet[1527]: I1124 14:00:26.018289    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj98c\" (UniqueName: \"kubernetes.io/projected/11a8b197-dd22-45df-9593-66d16fdefa80-kube-api-access-zj98c\") pod \"kube-proxy-jwmdg\" (UID: \"11a8b197-dd22-45df-9593-66d16fdefa80\") " pod="kube-system/kube-proxy-jwmdg"
	Nov 24 14:00:27 old-k8s-version-318786 kubelet[1527]: I1124 14:00:27.246948    1527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jwmdg" podStartSLOduration=2.246903083 podCreationTimestamp="2025-11-24 14:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:00:27.246446757 +0000 UTC m=+14.385235109" watchObservedRunningTime="2025-11-24 14:00:27.246903083 +0000 UTC m=+14.385691436"
	Nov 24 14:00:33 old-k8s-version-318786 kubelet[1527]: I1124 14:00:33.074010    1527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-z4rkx" podStartSLOduration=5.998729082 podCreationTimestamp="2025-11-24 14:00:25 +0000 UTC" firstStartedPulling="2025-11-24 14:00:26.522078288 +0000 UTC m=+13.660866641" lastFinishedPulling="2025-11-24 14:00:28.597316912 +0000 UTC m=+15.736105264" observedRunningTime="2025-11-24 14:00:29.252063076 +0000 UTC m=+16.390851428" watchObservedRunningTime="2025-11-24 14:00:33.073967705 +0000 UTC m=+20.212756058"
	Nov 24 14:00:39 old-k8s-version-318786 kubelet[1527]: I1124 14:00:39.082518    1527 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 24 14:00:39 old-k8s-version-318786 kubelet[1527]: I1124 14:00:39.121702    1527 topology_manager.go:215] "Topology Admit Handler" podUID="72202b02-1ca2-4c69-ad47-3f1ef90ba8ba" podNamespace="kube-system" podName="coredns-5dd5756b68-n7s8h"
	Nov 24 14:00:39 old-k8s-version-318786 kubelet[1527]: I1124 14:00:39.132093    1527 topology_manager.go:215] "Topology Admit Handler" podUID="2298aa73-9529-42f0-a0ec-22197acfa4ba" podNamespace="kube-system" podName="storage-provisioner"
	Nov 24 14:00:39 old-k8s-version-318786 kubelet[1527]: I1124 14:00:39.309362    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68nxx\" (UniqueName: \"kubernetes.io/projected/72202b02-1ca2-4c69-ad47-3f1ef90ba8ba-kube-api-access-68nxx\") pod \"coredns-5dd5756b68-n7s8h\" (UID: \"72202b02-1ca2-4c69-ad47-3f1ef90ba8ba\") " pod="kube-system/coredns-5dd5756b68-n7s8h"
	Nov 24 14:00:39 old-k8s-version-318786 kubelet[1527]: I1124 14:00:39.309430    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z64wd\" (UniqueName: \"kubernetes.io/projected/2298aa73-9529-42f0-a0ec-22197acfa4ba-kube-api-access-z64wd\") pod \"storage-provisioner\" (UID: \"2298aa73-9529-42f0-a0ec-22197acfa4ba\") " pod="kube-system/storage-provisioner"
	Nov 24 14:00:39 old-k8s-version-318786 kubelet[1527]: I1124 14:00:39.309458    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72202b02-1ca2-4c69-ad47-3f1ef90ba8ba-config-volume\") pod \"coredns-5dd5756b68-n7s8h\" (UID: \"72202b02-1ca2-4c69-ad47-3f1ef90ba8ba\") " pod="kube-system/coredns-5dd5756b68-n7s8h"
	Nov 24 14:00:39 old-k8s-version-318786 kubelet[1527]: I1124 14:00:39.309484    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2298aa73-9529-42f0-a0ec-22197acfa4ba-tmp\") pod \"storage-provisioner\" (UID: \"2298aa73-9529-42f0-a0ec-22197acfa4ba\") " pod="kube-system/storage-provisioner"
	Nov 24 14:00:40 old-k8s-version-318786 kubelet[1527]: I1124 14:00:40.295007    1527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-n7s8h" podStartSLOduration=15.294930673 podCreationTimestamp="2025-11-24 14:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:00:40.27945045 +0000 UTC m=+27.418238811" watchObservedRunningTime="2025-11-24 14:00:40.294930673 +0000 UTC m=+27.433719026"
	Nov 24 14:00:40 old-k8s-version-318786 kubelet[1527]: I1124 14:00:40.313747    1527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.313703157 podCreationTimestamp="2025-11-24 14:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:00:40.296336403 +0000 UTC m=+27.435124755" watchObservedRunningTime="2025-11-24 14:00:40.313703157 +0000 UTC m=+27.452491510"
	Nov 24 14:00:42 old-k8s-version-318786 kubelet[1527]: I1124 14:00:42.439571    1527 topology_manager.go:215] "Topology Admit Handler" podUID="f566ecf0-6907-464c-9185-0f1cac06d38f" podNamespace="default" podName="busybox"
	Nov 24 14:00:42 old-k8s-version-318786 kubelet[1527]: I1124 14:00:42.534626    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9mqp\" (UniqueName: \"kubernetes.io/projected/f566ecf0-6907-464c-9185-0f1cac06d38f-kube-api-access-t9mqp\") pod \"busybox\" (UID: \"f566ecf0-6907-464c-9185-0f1cac06d38f\") " pod="default/busybox"
	
	
	==> storage-provisioner [33ca9b6d24a80a1f0470355c5dc5bf87df622a7ffd33dad20b3a66e3d42820fb] <==
	I1124 14:00:39.762113       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:00:39.776081       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:00:39.776154       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 14:00:39.787120       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:00:39.787379       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-318786_ab0f5e48-32a3-4e29-9ee1-b1971bc22e35!
	I1124 14:00:39.788450       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe6ba064-a6c2-4186-b355-eb48ac5eb1d0", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-318786_ab0f5e48-32a3-4e29-9ee1-b1971bc22e35 became leader
	I1124 14:00:39.888593       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-318786_ab0f5e48-32a3-4e29-9ee1-b1971bc22e35!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-318786 -n old-k8s-version-318786
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-318786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
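For reference, the post-mortem diagnostics can be replayed by hand while the profile is still up; the commands below simply repeat the invocations recorded in this report (the out/minikube-linux-arm64 binary path and the old-k8s-version-318786 profile/context name are taken from this log and will differ in other environments):

out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-318786 -n old-k8s-version-318786
kubectl --context old-k8s-version-318786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
out/minikube-linux-arm64 -p old-k8s-version-318786 logs -n 25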
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-318786
helpers_test.go:243: (dbg) docker inspect old-k8s-version-318786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a1a9c211e03d84dc290244440868edd560e068d58cbff839724b36106b46b8b5",
	        "Created": "2025-11-24T13:59:48.707287298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203512,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:59:48.794762344Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/a1a9c211e03d84dc290244440868edd560e068d58cbff839724b36106b46b8b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a1a9c211e03d84dc290244440868edd560e068d58cbff839724b36106b46b8b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/a1a9c211e03d84dc290244440868edd560e068d58cbff839724b36106b46b8b5/hosts",
	        "LogPath": "/var/lib/docker/containers/a1a9c211e03d84dc290244440868edd560e068d58cbff839724b36106b46b8b5/a1a9c211e03d84dc290244440868edd560e068d58cbff839724b36106b46b8b5-json.log",
	        "Name": "/old-k8s-version-318786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-318786:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-318786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a1a9c211e03d84dc290244440868edd560e068d58cbff839724b36106b46b8b5",
	                "LowerDir": "/var/lib/docker/overlay2/6d81197f1905508bee65982ae14ace70a0ac50476483b3a6dbe6ee1b71c20126-init/diff:/var/lib/docker/overlay2/f206897dad0d7c6b66379aa7c75402ab98ba158a4fc5aedf84eda3d57da10430/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6d81197f1905508bee65982ae14ace70a0ac50476483b3a6dbe6ee1b71c20126/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6d81197f1905508bee65982ae14ace70a0ac50476483b3a6dbe6ee1b71c20126/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6d81197f1905508bee65982ae14ace70a0ac50476483b3a6dbe6ee1b71c20126/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-318786",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-318786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-318786",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-318786",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-318786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afc5451f55c0addfee2faf75046d85ee1aff51cfb29d1330d1b700fc0f910363",
	            "SandboxKey": "/var/run/docker/netns/afc5451f55c0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-318786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:35:e5:9c:e1:30",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3c8da78d6dab92e1227f095e0039dcc72885109237746924b800f0f7e07a64d9",
	                    "EndpointID": "c068219706ac0808a20d3010c587a2e59831507d8b6c4030ff3e4a62ce6b15dc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-318786",
	                        "a1a9c211e03d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-318786 -n old-k8s-version-318786
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-318786 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-318786 logs -n 25: (1.244760502s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-803934 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo containerd config dump                                                                                                                                                                                                        │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ ssh     │ -p cilium-803934 sudo crio config                                                                                                                                                                                                                   │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ delete  │ -p cilium-803934                                                                                                                                                                                                                                    │ cilium-803934             │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p force-systemd-env-134839 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-134839  │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p kubernetes-upgrade-758885                                                                                                                                                                                                                        │ kubernetes-upgrade-758885 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p cert-expiration-865605 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-865605    │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ force-systemd-env-134839 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-134839  │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p force-systemd-env-134839                                                                                                                                                                                                                         │ force-systemd-env-134839  │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p cert-options-440754 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-440754       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ cert-options-440754 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-440754       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ -p cert-options-440754 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-440754       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p cert-options-440754                                                                                                                                                                                                                              │ cert-options-440754       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p old-k8s-version-318786 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-318786    │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 14:00 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:59:42
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:59:42.406479  203121 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:59:42.406674  203121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:59:42.406701  203121 out.go:374] Setting ErrFile to fd 2...
	I1124 13:59:42.406722  203121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:59:42.407140  203121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 13:59:42.407724  203121 out.go:368] Setting JSON to false
	I1124 13:59:42.409260  203121 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6131,"bootTime":1763986651,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 13:59:42.409372  203121 start.go:143] virtualization:  
	I1124 13:59:42.413282  203121 out.go:179] * [old-k8s-version-318786] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 13:59:42.417925  203121 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:59:42.418098  203121 notify.go:221] Checking for updates...
	I1124 13:59:42.424905  203121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:59:42.428148  203121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 13:59:42.431322  203121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	I1124 13:59:42.434379  203121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 13:59:42.438100  203121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:59:42.442160  203121 config.go:182] Loaded profile config "cert-expiration-865605": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:59:42.442285  203121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:59:42.470073  203121 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 13:59:42.470195  203121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:59:42.532782  203121 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 13:59:42.52123261 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:59:42.532892  203121 docker.go:319] overlay module found
	I1124 13:59:42.536185  203121 out.go:179] * Using the docker driver based on user configuration
	I1124 13:59:42.539175  203121 start.go:309] selected driver: docker
	I1124 13:59:42.539208  203121 start.go:927] validating driver "docker" against <nil>
	I1124 13:59:42.539232  203121 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:59:42.540233  203121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:59:42.601740  203121 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 13:59:42.592481576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:59:42.601887  203121 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:59:42.602115  203121 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:59:42.605231  203121 out.go:179] * Using Docker driver with root privileges
	I1124 13:59:42.608204  203121 cni.go:84] Creating CNI manager for ""
	I1124 13:59:42.608281  203121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:59:42.608296  203121 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:59:42.608380  203121 start.go:353] cluster config:
	{Name:old-k8s-version-318786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-318786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:59:42.611704  203121 out.go:179] * Starting "old-k8s-version-318786" primary control-plane node in "old-k8s-version-318786" cluster
	I1124 13:59:42.614615  203121 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 13:59:42.617691  203121 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:59:42.620619  203121 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:59:42.620699  203121 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1124 13:59:42.620716  203121 cache.go:65] Caching tarball of preloaded images
	I1124 13:59:42.620714  203121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:59:42.620820  203121 preload.go:238] Found /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1124 13:59:42.620838  203121 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1124 13:59:42.620958  203121 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/config.json ...
	I1124 13:59:42.620983  203121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/config.json: {Name:mkdbbadabe7d23b9f104ff19d81818950111a382 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:42.640749  203121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:59:42.640776  203121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:59:42.640802  203121 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:59:42.640833  203121 start.go:360] acquireMachinesLock for old-k8s-version-318786: {Name:mkda208a8325231a646a1a7f876724cc4fca17ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:59:42.640958  203121 start.go:364] duration metric: took 103.057µs to acquireMachinesLock for "old-k8s-version-318786"
	I1124 13:59:42.640986  203121 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-318786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-318786 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 13:59:42.641059  203121 start.go:125] createHost starting for "" (driver="docker")
	I1124 13:59:42.644471  203121 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:59:42.644694  203121 start.go:159] libmachine.API.Create for "old-k8s-version-318786" (driver="docker")
	I1124 13:59:42.644747  203121 client.go:173] LocalClient.Create starting
	I1124 13:59:42.644827  203121 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem
	I1124 13:59:42.644867  203121 main.go:143] libmachine: Decoding PEM data...
	I1124 13:59:42.644888  203121 main.go:143] libmachine: Parsing certificate...
	I1124 13:59:42.644949  203121 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem
	I1124 13:59:42.644971  203121 main.go:143] libmachine: Decoding PEM data...
	I1124 13:59:42.644986  203121 main.go:143] libmachine: Parsing certificate...
	I1124 13:59:42.645338  203121 cli_runner.go:164] Run: docker network inspect old-k8s-version-318786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:59:42.661505  203121 cli_runner.go:211] docker network inspect old-k8s-version-318786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:59:42.661596  203121 network_create.go:284] running [docker network inspect old-k8s-version-318786] to gather additional debugging logs...
	I1124 13:59:42.661615  203121 cli_runner.go:164] Run: docker network inspect old-k8s-version-318786
	W1124 13:59:42.677608  203121 cli_runner.go:211] docker network inspect old-k8s-version-318786 returned with exit code 1
	I1124 13:59:42.677643  203121 network_create.go:287] error running [docker network inspect old-k8s-version-318786]: docker network inspect old-k8s-version-318786: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-318786 not found
	I1124 13:59:42.677659  203121 network_create.go:289] output of [docker network inspect old-k8s-version-318786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-318786 not found
	
	** /stderr **
	I1124 13:59:42.677758  203121 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:59:42.694925  203121 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e5e15b13860d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:3d:37:c4:cc:77} reservation:<nil>}
	I1124 13:59:42.695253  203121 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-66593a990bce IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:c0:9b:bc:41:ca} reservation:<nil>}
	I1124 13:59:42.695642  203121 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-37e9fb0954cf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:0b:6f:6e:b2:8c} reservation:<nil>}
	I1124 13:59:42.695904  203121 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5977b32dc412 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:75:42:7c:e9:e6} reservation:<nil>}
	I1124 13:59:42.696411  203121 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019bcfe0}
	I1124 13:59:42.696437  203121 network_create.go:124] attempt to create docker network old-k8s-version-318786 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 13:59:42.696498  203121 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-318786 old-k8s-version-318786
	I1124 13:59:42.754268  203121 network_create.go:108] docker network old-k8s-version-318786 192.168.85.0/24 created
	I1124 13:59:42.754297  203121 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-318786" container
	I1124 13:59:42.754382  203121 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:59:42.771474  203121 cli_runner.go:164] Run: docker volume create old-k8s-version-318786 --label name.minikube.sigs.k8s.io=old-k8s-version-318786 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:59:42.789916  203121 oci.go:103] Successfully created a docker volume old-k8s-version-318786
	I1124 13:59:42.790028  203121 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-318786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-318786 --entrypoint /usr/bin/test -v old-k8s-version-318786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:59:43.376934  203121 oci.go:107] Successfully prepared a docker volume old-k8s-version-318786
	I1124 13:59:43.377002  203121 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:59:43.377014  203121 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 13:59:43.377093  203121 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-318786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 13:59:48.629782  203121 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-318786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.252643456s)
	I1124 13:59:48.629823  203121 kic.go:203] duration metric: took 5.252805903s to extract preloaded images to volume ...
	W1124 13:59:48.629966  203121 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 13:59:48.630073  203121 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:59:48.692534  203121 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-318786 --name old-k8s-version-318786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-318786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-318786 --network old-k8s-version-318786 --ip 192.168.85.2 --volume old-k8s-version-318786:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:59:49.023181  203121 cli_runner.go:164] Run: docker container inspect old-k8s-version-318786 --format={{.State.Running}}
	I1124 13:59:49.046529  203121 cli_runner.go:164] Run: docker container inspect old-k8s-version-318786 --format={{.State.Status}}
	I1124 13:59:49.073693  203121 cli_runner.go:164] Run: docker exec old-k8s-version-318786 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:59:49.143680  203121 oci.go:144] the created container "old-k8s-version-318786" has a running status.
	I1124 13:59:49.143714  203121 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa...
	I1124 13:59:49.471341  203121 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:59:49.501921  203121 cli_runner.go:164] Run: docker container inspect old-k8s-version-318786 --format={{.State.Status}}
	I1124 13:59:49.532238  203121 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:59:49.532267  203121 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-318786 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:59:49.607023  203121 cli_runner.go:164] Run: docker container inspect old-k8s-version-318786 --format={{.State.Status}}
	I1124 13:59:49.637450  203121 machine.go:94] provisionDockerMachine start ...
	I1124 13:59:49.637558  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 13:59:49.663172  203121 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:49.663576  203121 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1124 13:59:49.663586  203121 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:59:49.666892  203121 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 13:59:52.819647  203121 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-318786
	
	I1124 13:59:52.819716  203121 ubuntu.go:182] provisioning hostname "old-k8s-version-318786"
	I1124 13:59:52.819805  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 13:59:52.837381  203121 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:52.837693  203121 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1124 13:59:52.837710  203121 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-318786 && echo "old-k8s-version-318786" | sudo tee /etc/hostname
	I1124 13:59:53.001525  203121 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-318786
	
	I1124 13:59:53.001631  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 13:59:53.024082  203121 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:53.024554  203121 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1124 13:59:53.024610  203121 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-318786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-318786/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-318786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:59:53.180483  203121 main.go:143] libmachine: SSH cmd err, output: <nil>: 
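The SSH script above either rewrites an existing 127.0.1.1 entry or appends one so the node resolves its own hostname locally. A rough Go equivalent of that edit over the file contents (hypothetical helper, not minikube code):

package main

import (
	"fmt"
	"strings"
)

// pinHostname returns hosts with a "127.0.1.1 <name>" entry: an existing
// 127.0.1.1 line is rewritten, otherwise one is appended, matching the
// sed/tee logic in the provisioning script above.
func pinHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(pinHostname("127.0.0.1 localhost\n", "old-k8s-version-318786"))
}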
	I1124 13:59:53.180555  203121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2368/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2368/.minikube}
	I1124 13:59:53.180601  203121 ubuntu.go:190] setting up certificates
	I1124 13:59:53.180641  203121 provision.go:84] configureAuth start
	I1124 13:59:53.180754  203121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-318786
	I1124 13:59:53.197870  203121 provision.go:143] copyHostCerts
	I1124 13:59:53.197937  203121 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem, removing ...
	I1124 13:59:53.197947  203121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem
	I1124 13:59:53.198026  203121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem (1679 bytes)
	I1124 13:59:53.198115  203121 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem, removing ...
	I1124 13:59:53.198120  203121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem
	I1124 13:59:53.198145  203121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem (1082 bytes)
	I1124 13:59:53.198195  203121 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem, removing ...
	I1124 13:59:53.198199  203121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem
	I1124 13:59:53.198221  203121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem (1123 bytes)
	I1124 13:59:53.198264  203121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-318786 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-318786]
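The server cert above is issued for the SAN list [127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-318786]. A minimal crypto/x509 sketch of building a certificate with those SANs; it self-signs for brevity, whereas the provisioner signs server.pem with the minikube CA key:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Server key and a certificate template carrying the SANs listed in the log.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-318786"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-318786"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	// Self-signed here for brevity; minikube signs with its CA certificate and key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}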
	I1124 13:59:53.447750  203121 provision.go:177] copyRemoteCerts
	I1124 13:59:53.447821  203121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:59:53.447859  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 13:59:53.466989  203121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa Username:docker}
	I1124 13:59:53.573838  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 13:59:53.593131  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 13:59:53.614562  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 13:59:53.634677  203121 provision.go:87] duration metric: took 453.994052ms to configureAuth
	I1124 13:59:53.634716  203121 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:59:53.634894  203121 config.go:182] Loaded profile config "old-k8s-version-318786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 13:59:53.634916  203121 machine.go:97] duration metric: took 3.997446316s to provisionDockerMachine
	I1124 13:59:53.634923  203121 client.go:176] duration metric: took 10.990163165s to LocalClient.Create
	I1124 13:59:53.634942  203121 start.go:167] duration metric: took 10.990248318s to libmachine.API.Create "old-k8s-version-318786"
	I1124 13:59:53.634951  203121 start.go:293] postStartSetup for "old-k8s-version-318786" (driver="docker")
	I1124 13:59:53.634967  203121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:59:53.635028  203121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:59:53.635072  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 13:59:53.651615  203121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa Username:docker}
	I1124 13:59:53.760351  203121 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:59:53.763787  203121 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:59:53.763818  203121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:59:53.763831  203121 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/addons for local assets ...
	I1124 13:59:53.763886  203121 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/files for local assets ...
	I1124 13:59:53.764002  203121 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem -> 41782.pem in /etc/ssl/certs
	I1124 13:59:53.764116  203121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:59:53.771607  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /etc/ssl/certs/41782.pem (1708 bytes)
	I1124 13:59:53.790229  203121 start.go:296] duration metric: took 155.256983ms for postStartSetup
	I1124 13:59:53.790653  203121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-318786
	I1124 13:59:53.807439  203121 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/config.json ...
	I1124 13:59:53.807757  203121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:59:53.807816  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 13:59:53.825527  203121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa Username:docker}
	I1124 13:59:53.928742  203121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:59:53.933408  203121 start.go:128] duration metric: took 11.29232535s to createHost
	I1124 13:59:53.933433  203121 start.go:83] releasing machines lock for "old-k8s-version-318786", held for 11.292464025s
	I1124 13:59:53.933507  203121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-318786
	I1124 13:59:53.950335  203121 ssh_runner.go:195] Run: cat /version.json
	I1124 13:59:53.950395  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 13:59:53.950688  203121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:59:53.950748  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 13:59:53.969960  203121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa Username:docker}
	I1124 13:59:53.970283  203121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa Username:docker}
	I1124 13:59:54.187220  203121 ssh_runner.go:195] Run: systemctl --version
	I1124 13:59:54.193763  203121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:59:54.197792  203121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:59:54.197862  203121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:59:54.225219  203121 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
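The find/mv run above sidelines any bridge or podman CNI configs so the recommended kindnet CNI can own pod networking. A small Go sketch of the same rename-to-disable idea (assumes a flat /etc/cni/net.d and skips files already disabled):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames *bridge* and *podman* config files in dir to
// <name>.mk_disabled, skipping files that already carry the suffix.
func disableConflictingCNI(dir string) ([]string, error) {
	var disabled []string
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	d, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("disabled:", d)
}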
	I1124 13:59:54.225241  203121 start.go:496] detecting cgroup driver to use...
	I1124 13:59:54.225273  203121 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 13:59:54.225319  203121 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 13:59:54.240905  203121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 13:59:54.255129  203121 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:59:54.255221  203121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:59:54.274287  203121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:59:54.293183  203121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:59:54.421827  203121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:59:54.546597  203121 docker.go:234] disabling docker service ...
	I1124 13:59:54.546687  203121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:59:54.569497  203121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:59:54.583215  203121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:59:54.700724  203121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:59:54.819165  203121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:59:54.832231  203121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:59:54.851866  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1124 13:59:54.862178  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 13:59:54.871620  203121 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 13:59:54.871738  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 13:59:54.882231  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:59:54.891717  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 13:59:54.901467  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 13:59:54.910294  203121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:59:54.918660  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 13:59:54.927868  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 13:59:54.937082  203121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 13:59:54.946216  203121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:59:54.954056  203121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:59:54.961958  203121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:59:55.097492  203121 ssh_runner.go:195] Run: sudo systemctl restart containerd
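The sed series above edits /etc/containerd/config.toml in place (pause image, SystemdCgroup = false for the cgroupfs driver, runc v2, CNI conf_dir) before containerd is restarted. As an illustration of one of those edits, a Go regexp equivalent of the SystemdCgroup rewrite; this is a sketch, not minikube's implementation:

package main

import (
	"fmt"
	"regexp"
)

// forceCgroupfs rewrites any "SystemdCgroup = <value>" line to false,
// preserving indentation, like the sed invocation in the log.
func forceCgroupfs(config string) string {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
}

func main() {
	in := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n"
	fmt.Print(forceCgroupfs(in))
}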
	I1124 13:59:55.230526  203121 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 13:59:55.230649  203121 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 13:59:55.234998  203121 start.go:564] Will wait 60s for crictl version
	I1124 13:59:55.235132  203121 ssh_runner.go:195] Run: which crictl
	I1124 13:59:55.238882  203121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:59:55.268214  203121 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 13:59:55.268356  203121 ssh_runner.go:195] Run: containerd --version
	I1124 13:59:55.288303  203121 ssh_runner.go:195] Run: containerd --version
	I1124 13:59:55.314523  203121 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1124 13:59:55.317381  203121 cli_runner.go:164] Run: docker network inspect old-k8s-version-318786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:59:55.334289  203121 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 13:59:55.338412  203121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:59:55.348875  203121 kubeadm.go:884] updating cluster {Name:old-k8s-version-318786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-318786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:59:55.349007  203121 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:59:55.349078  203121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:59:55.373604  203121 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:59:55.373629  203121 containerd.go:534] Images already preloaded, skipping extraction
	I1124 13:59:55.373693  203121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:59:55.398685  203121 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 13:59:55.398711  203121 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:59:55.398719  203121 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 containerd true true} ...
	I1124 13:59:55.398825  203121 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-318786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-318786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:59:55.398898  203121 ssh_runner.go:195] Run: sudo crictl info
	I1124 13:59:55.429304  203121 cni.go:84] Creating CNI manager for ""
	I1124 13:59:55.429328  203121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:59:55.429372  203121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:59:55.429403  203121 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-318786 NodeName:old-k8s-version-318786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:59:55.429550  203121 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-318786"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
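The kubeadm.yaml above is rendered from the kubeadm options dump (advertise address, cluster name, pod and service CIDRs) and then copied to /var/tmp/minikube. A small text/template sketch that renders a trimmed-down fragment of such a config from those values; the template text is an illustration, not the full manifest generated above:

package main

import (
	"os"
	"text/template"
)

// opts carries the handful of values visible in the kubeadm options dump above.
type opts struct {
	AdvertiseAddress string
	BindPort         int
	ClusterName      string
	PodSubnet        string
	ServiceSubnet    string
}

// initCfg is a trimmed-down fragment of the generated config, for illustration only.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	// Values as they appear in the log above (clusterName renders as "mk").
	err := t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.85.2",
		BindPort:         8443,
		ClusterName:      "mk",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}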
	I1124 13:59:55.429622  203121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 13:59:55.437772  203121 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:59:55.437895  203121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:59:55.445856  203121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1124 13:59:55.459167  203121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:59:55.473519  203121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1124 13:59:55.487760  203121 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:59:55.491722  203121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:59:55.502994  203121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:59:55.625341  203121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:59:55.647018  203121 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786 for IP: 192.168.85.2
	I1124 13:59:55.647099  203121 certs.go:195] generating shared ca certs ...
	I1124 13:59:55.647130  203121 certs.go:227] acquiring lock for ca certs: {Name:mkcd8707c782acde0e57168c044a3df942dc4ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:55.647322  203121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key
	I1124 13:59:55.647396  203121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key
	I1124 13:59:55.647432  203121 certs.go:257] generating profile certs ...
	I1124 13:59:55.647513  203121 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.key
	I1124 13:59:55.647551  203121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt with IP's: []
	I1124 13:59:56.033129  203121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt ...
	I1124 13:59:56.033212  203121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: {Name:mk69bb915606644e0645060fa46449dd65f83095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:56.033449  203121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.key ...
	I1124 13:59:56.033488  203121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.key: {Name:mkfa34a7c8b2d69c736fc1cfd2304ae49133ac4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:56.033640  203121 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.key.251f69ae
	I1124 13:59:56.033684  203121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.crt.251f69ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 13:59:56.281567  203121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.crt.251f69ae ...
	I1124 13:59:56.281598  203121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.crt.251f69ae: {Name:mk572ed713bf0eec1d0b840d076729a08786aff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:56.281810  203121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.key.251f69ae ...
	I1124 13:59:56.281825  203121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.key.251f69ae: {Name:mk6b112bdb309b7ed87e7e056627f1c30ccc769a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:56.281918  203121 certs.go:382] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.crt.251f69ae -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.crt
	I1124 13:59:56.281996  203121 certs.go:386] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.key.251f69ae -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.key
	I1124 13:59:56.282057  203121 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/proxy-client.key
	I1124 13:59:56.282077  203121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/proxy-client.crt with IP's: []
	I1124 13:59:56.404952  203121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/proxy-client.crt ...
	I1124 13:59:56.404984  203121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/proxy-client.crt: {Name:mk506f5bcd13da36d0e32b27db8471ef560cbc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:56.405167  203121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/proxy-client.key ...
	I1124 13:59:56.405182  203121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/proxy-client.key: {Name:mk63bb1c02064c41d85f1d8bf24cb0b4a26d687a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:56.405366  203121 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem (1338 bytes)
	W1124 13:59:56.405416  203121 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178_empty.pem, impossibly tiny 0 bytes
	I1124 13:59:56.405425  203121 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 13:59:56.405454  203121 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem (1082 bytes)
	I1124 13:59:56.405487  203121 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:59:56.405517  203121 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem (1679 bytes)
	I1124 13:59:56.405566  203121 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem (1708 bytes)
	I1124 13:59:56.406138  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:59:56.425371  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:59:56.445638  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:59:56.465263  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:59:56.484806  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 13:59:56.503229  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 13:59:56.526839  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:59:56.546328  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 13:59:56.568330  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem --> /usr/share/ca-certificates/4178.pem (1338 bytes)
	I1124 13:59:56.588914  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /usr/share/ca-certificates/41782.pem (1708 bytes)
	I1124 13:59:56.609069  203121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:59:56.635519  203121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:59:56.649021  203121 ssh_runner.go:195] Run: openssl version
	I1124 13:59:56.655362  203121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41782.pem && ln -fs /usr/share/ca-certificates/41782.pem /etc/ssl/certs/41782.pem"
	I1124 13:59:56.664078  203121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41782.pem
	I1124 13:59:56.667939  203121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/41782.pem
	I1124 13:59:56.668018  203121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41782.pem
	I1124 13:59:56.709276  203121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:59:56.717713  203121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:59:56.725687  203121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:59:56.729416  203121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:59:56.729511  203121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:59:56.771028  203121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 13:59:56.779345  203121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4178.pem && ln -fs /usr/share/ca-certificates/4178.pem /etc/ssl/certs/4178.pem"
	I1124 13:59:56.787738  203121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4178.pem
	I1124 13:59:56.792183  203121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4178.pem
	I1124 13:59:56.792289  203121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4178.pem
	I1124 13:59:56.833374  203121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4178.pem /etc/ssl/certs/51391683.0"
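Each certificate copied into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (for example 3ec20f2e.0 above). A sketch that shells out to openssl for the hash and creates the link, roughly what the remote ln -fs commands do; the helper name and paths are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash runs `openssl x509 -hash -noout -in certPath` and creates
// <certsDir>/<hash>.0 pointing at certPath, mirroring the ln -fs calls above.
func linkByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	// -f semantics: remove any existing link before recreating it.
	_ = os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("created", link)
}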
	I1124 13:59:56.841910  203121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:59:56.845538  203121 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:59:56.845595  203121 kubeadm.go:401] StartCluster: {Name:old-k8s-version-318786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-318786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:59:56.845673  203121 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 13:59:56.845734  203121 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:59:56.874154  203121 cri.go:89] found id: ""
	I1124 13:59:56.874225  203121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:59:56.882169  203121 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:59:56.890196  203121 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:59:56.890264  203121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:59:56.898559  203121 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:59:56.898579  203121 kubeadm.go:158] found existing configuration files:
	
	I1124 13:59:56.898629  203121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:59:56.906476  203121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:59:56.906616  203121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:59:56.914551  203121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:59:56.922673  203121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:59:56.922748  203121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:59:56.931103  203121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:59:56.939465  203121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:59:56.939567  203121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:59:56.947086  203121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:59:56.955210  203121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:59:56.955302  203121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:59:56.963184  203121 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:59:57.020344  203121 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 13:59:57.020647  203121 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:59:57.059971  203121 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:59:57.060049  203121 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 13:59:57.060090  203121 kubeadm.go:319] OS: Linux
	I1124 13:59:57.060146  203121 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:59:57.060199  203121 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 13:59:57.060249  203121 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:59:57.060302  203121 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:59:57.060354  203121 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:59:57.060407  203121 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:59:57.060457  203121 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:59:57.060509  203121 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:59:57.060558  203121 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 13:59:57.153578  203121 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:59:57.153733  203121 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:59:57.153905  203121 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 13:59:57.330900  203121 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:59:57.336703  203121 out.go:252]   - Generating certificates and keys ...
	I1124 13:59:57.336796  203121 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:59:57.336870  203121 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:59:57.865889  203121 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:59:58.185353  203121 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:59:59.130735  203121 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:59:59.642294  203121 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:59:59.906079  203121 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:59:59.906451  203121 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-318786] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:00:00.123407  203121 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:00:00.123551  203121 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-318786] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:00:00.270762  203121 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:00:01.217860  203121 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:00:01.724986  203121 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:00:01.740375  203121 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:00:02.910438  203121 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:00:03.183161  203121 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:00:03.857453  203121 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:00:04.272263  203121 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:00:04.273275  203121 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:00:04.276092  203121 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:00:04.279552  203121 out.go:252]   - Booting up control plane ...
	I1124 14:00:04.279655  203121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:00:04.279733  203121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:00:04.279800  203121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:00:04.298211  203121 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:00:04.298994  203121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:00:04.299292  203121 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:00:04.444292  203121 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1124 14:00:11.451050  203121 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.006841 seconds
	I1124 14:00:11.451179  203121 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:00:11.470632  203121 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:00:12.039593  203121 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:00:12.039804  203121 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-318786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:00:12.553048  203121 kubeadm.go:319] [bootstrap-token] Using token: lzgex3.uugtb4pr04721m2a
	I1124 14:00:12.555996  203121 out.go:252]   - Configuring RBAC rules ...
	I1124 14:00:12.556126  203121 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:00:12.561765  203121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:00:12.571340  203121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:00:12.578855  203121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:00:12.583307  203121 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:00:12.587409  203121 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:00:12.604367  203121 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:00:12.927512  203121 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:00:12.998711  203121 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:00:13.009044  203121 kubeadm.go:319] 
	I1124 14:00:13.009136  203121 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:00:13.009150  203121 kubeadm.go:319] 
	I1124 14:00:13.009228  203121 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:00:13.009237  203121 kubeadm.go:319] 
	I1124 14:00:13.009262  203121 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:00:13.009867  203121 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:00:13.009932  203121 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:00:13.009946  203121 kubeadm.go:319] 
	I1124 14:00:13.010001  203121 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:00:13.010013  203121 kubeadm.go:319] 
	I1124 14:00:13.010061  203121 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:00:13.010069  203121 kubeadm.go:319] 
	I1124 14:00:13.010122  203121 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:00:13.010202  203121 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:00:13.010274  203121 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:00:13.010281  203121 kubeadm.go:319] 
	I1124 14:00:13.010670  203121 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:00:13.010761  203121 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:00:13.010771  203121 kubeadm.go:319] 
	I1124 14:00:13.011083  203121 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lzgex3.uugtb4pr04721m2a \
	I1124 14:00:13.011197  203121 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac \
	I1124 14:00:13.011480  203121 kubeadm.go:319] 	--control-plane 
	I1124 14:00:13.011502  203121 kubeadm.go:319] 
	I1124 14:00:13.011780  203121 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:00:13.011795  203121 kubeadm.go:319] 
	I1124 14:00:13.012105  203121 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lzgex3.uugtb4pr04721m2a \
	I1124 14:00:13.012432  203121 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac 
	I1124 14:00:13.016246  203121 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:00:13.016372  203121 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 14:00:13.016396  203121 cni.go:84] Creating CNI manager for ""
	I1124 14:00:13.016409  203121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:00:13.019688  203121 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:00:13.022683  203121 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:00:13.034888  203121 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 14:00:13.034906  203121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:00:13.059514  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 14:00:14.290955  203121 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.231358047s)
	I1124 14:00:14.291008  203121 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:00:14.291124  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:14.291189  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-318786 minikube.k8s.io/updated_at=2025_11_24T14_00_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=old-k8s-version-318786 minikube.k8s.io/primary=true
	I1124 14:00:14.451653  203121 ops.go:34] apiserver oom_adj: -16
	I1124 14:00:14.451772  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:14.952612  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:15.452444  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:15.952508  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:16.452482  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:16.952838  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:17.452425  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:17.951984  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:18.452384  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:18.952884  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:19.452844  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:19.951825  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:20.452041  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:20.954241  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:21.452323  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:21.952432  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:22.451804  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:22.951865  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:23.452374  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:23.952376  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:24.452544  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:24.952573  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:25.451889  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:25.951879  203121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:00:26.060844  203121 kubeadm.go:1114] duration metric: took 11.769763814s to wait for elevateKubeSystemPrivileges
	I1124 14:00:26.060873  203121 kubeadm.go:403] duration metric: took 29.215284106s to StartCluster
	I1124 14:00:26.060891  203121 settings.go:142] acquiring lock: {Name:mk2b0bbff4d8ced468f457362668d43b813dc062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:26.060955  203121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:00:26.061937  203121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/kubeconfig: {Name:mk246d21eaffbd8aca2abdc1b2f89d6fcc902f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:26.062157  203121 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 14:00:26.062320  203121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:00:26.062598  203121 config.go:182] Loaded profile config "old-k8s-version-318786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 14:00:26.062635  203121 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:00:26.062693  203121 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-318786"
	I1124 14:00:26.062708  203121 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-318786"
	I1124 14:00:26.062728  203121 host.go:66] Checking if "old-k8s-version-318786" exists ...
	I1124 14:00:26.063138  203121 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-318786"
	I1124 14:00:26.063163  203121 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-318786"
	I1124 14:00:26.063454  203121 cli_runner.go:164] Run: docker container inspect old-k8s-version-318786 --format={{.State.Status}}
	I1124 14:00:26.063514  203121 cli_runner.go:164] Run: docker container inspect old-k8s-version-318786 --format={{.State.Status}}
	I1124 14:00:26.066058  203121 out.go:179] * Verifying Kubernetes components...
	I1124 14:00:26.069103  203121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:00:26.111201  203121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:00:26.116253  203121 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-318786"
	I1124 14:00:26.116292  203121 host.go:66] Checking if "old-k8s-version-318786" exists ...
	I1124 14:00:26.116709  203121 cli_runner.go:164] Run: docker container inspect old-k8s-version-318786 --format={{.State.Status}}
	I1124 14:00:26.116830  203121 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:00:26.116844  203121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:00:26.116892  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 14:00:26.150658  203121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa Username:docker}
	I1124 14:00:26.161222  203121 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:00:26.161243  203121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:00:26.161315  203121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-318786
	I1124 14:00:26.189630  203121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/old-k8s-version-318786/id_rsa Username:docker}
	I1124 14:00:26.424488  203121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 14:00:26.425624  203121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:00:26.485066  203121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:00:26.513639  203121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:00:27.429647  203121 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.003904202s)
	I1124 14:00:27.430570  203121 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-318786" to be "Ready" ...
	I1124 14:00:27.431403  203121 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.005751061s)
	I1124 14:00:27.431468  203121 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 14:00:27.809398  203121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.324299578s)
	I1124 14:00:27.809491  203121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.295828834s)
	I1124 14:00:27.819279  203121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 14:00:27.822260  203121 addons.go:530] duration metric: took 1.759614941s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 14:00:27.936206  203121 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-318786" context rescaled to 1 replicas
	W1124 14:00:29.434630  203121 node_ready.go:57] node "old-k8s-version-318786" has "Ready":"False" status (will retry)
	W1124 14:00:31.933623  203121 node_ready.go:57] node "old-k8s-version-318786" has "Ready":"False" status (will retry)
	W1124 14:00:33.934195  203121 node_ready.go:57] node "old-k8s-version-318786" has "Ready":"False" status (will retry)
	W1124 14:00:36.434086  203121 node_ready.go:57] node "old-k8s-version-318786" has "Ready":"False" status (will retry)
	W1124 14:00:38.434475  203121 node_ready.go:57] node "old-k8s-version-318786" has "Ready":"False" status (will retry)
	I1124 14:00:39.437064  203121 node_ready.go:49] node "old-k8s-version-318786" is "Ready"
	I1124 14:00:39.437091  203121 node_ready.go:38] duration metric: took 12.006466784s for node "old-k8s-version-318786" to be "Ready" ...
	I1124 14:00:39.437104  203121 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:00:39.437165  203121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:00:39.453227  203121 api_server.go:72] duration metric: took 13.391041621s to wait for apiserver process to appear ...
	I1124 14:00:39.453251  203121 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:00:39.453271  203121 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 14:00:39.462068  203121 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 14:00:39.463530  203121 api_server.go:141] control plane version: v1.28.0
	I1124 14:00:39.463554  203121 api_server.go:131] duration metric: took 10.295662ms to wait for apiserver health ...
	I1124 14:00:39.463563  203121 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:00:39.467352  203121 system_pods.go:59] 8 kube-system pods found
	I1124 14:00:39.467391  203121 system_pods.go:61] "coredns-5dd5756b68-n7s8h" [72202b02-1ca2-4c69-ad47-3f1ef90ba8ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:00:39.467397  203121 system_pods.go:61] "etcd-old-k8s-version-318786" [dd78fff3-f901-4dc0-9c77-907dbb69e36d] Running
	I1124 14:00:39.467402  203121 system_pods.go:61] "kindnet-z4rkx" [053d781f-846e-4391-a537-edd057019339] Running
	I1124 14:00:39.467406  203121 system_pods.go:61] "kube-apiserver-old-k8s-version-318786" [7f0596ec-97f5-4a70-974d-38c5d9a51273] Running
	I1124 14:00:39.467410  203121 system_pods.go:61] "kube-controller-manager-old-k8s-version-318786" [4ae0e32a-b5f2-4e37-82d1-d76bfabbedd5] Running
	I1124 14:00:39.467414  203121 system_pods.go:61] "kube-proxy-jwmdg" [11a8b197-dd22-45df-9593-66d16fdefa80] Running
	I1124 14:00:39.467418  203121 system_pods.go:61] "kube-scheduler-old-k8s-version-318786" [01641e80-7a9e-48c2-b9e3-d384beab62d7] Running
	I1124 14:00:39.467423  203121 system_pods.go:61] "storage-provisioner" [2298aa73-9529-42f0-a0ec-22197acfa4ba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:00:39.467428  203121 system_pods.go:74] duration metric: took 3.859916ms to wait for pod list to return data ...
	I1124 14:00:39.467435  203121 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:00:39.470040  203121 default_sa.go:45] found service account: "default"
	I1124 14:00:39.470060  203121 default_sa.go:55] duration metric: took 2.619768ms for default service account to be created ...
	I1124 14:00:39.470070  203121 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:00:39.473490  203121 system_pods.go:86] 8 kube-system pods found
	I1124 14:00:39.473522  203121 system_pods.go:89] "coredns-5dd5756b68-n7s8h" [72202b02-1ca2-4c69-ad47-3f1ef90ba8ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:00:39.473528  203121 system_pods.go:89] "etcd-old-k8s-version-318786" [dd78fff3-f901-4dc0-9c77-907dbb69e36d] Running
	I1124 14:00:39.473534  203121 system_pods.go:89] "kindnet-z4rkx" [053d781f-846e-4391-a537-edd057019339] Running
	I1124 14:00:39.473539  203121 system_pods.go:89] "kube-apiserver-old-k8s-version-318786" [7f0596ec-97f5-4a70-974d-38c5d9a51273] Running
	I1124 14:00:39.473543  203121 system_pods.go:89] "kube-controller-manager-old-k8s-version-318786" [4ae0e32a-b5f2-4e37-82d1-d76bfabbedd5] Running
	I1124 14:00:39.473547  203121 system_pods.go:89] "kube-proxy-jwmdg" [11a8b197-dd22-45df-9593-66d16fdefa80] Running
	I1124 14:00:39.473552  203121 system_pods.go:89] "kube-scheduler-old-k8s-version-318786" [01641e80-7a9e-48c2-b9e3-d384beab62d7] Running
	I1124 14:00:39.473558  203121 system_pods.go:89] "storage-provisioner" [2298aa73-9529-42f0-a0ec-22197acfa4ba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:00:39.473585  203121 retry.go:31] will retry after 299.487693ms: missing components: kube-dns
	I1124 14:00:39.780995  203121 system_pods.go:86] 8 kube-system pods found
	I1124 14:00:39.781029  203121 system_pods.go:89] "coredns-5dd5756b68-n7s8h" [72202b02-1ca2-4c69-ad47-3f1ef90ba8ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:00:39.781036  203121 system_pods.go:89] "etcd-old-k8s-version-318786" [dd78fff3-f901-4dc0-9c77-907dbb69e36d] Running
	I1124 14:00:39.781043  203121 system_pods.go:89] "kindnet-z4rkx" [053d781f-846e-4391-a537-edd057019339] Running
	I1124 14:00:39.781047  203121 system_pods.go:89] "kube-apiserver-old-k8s-version-318786" [7f0596ec-97f5-4a70-974d-38c5d9a51273] Running
	I1124 14:00:39.781051  203121 system_pods.go:89] "kube-controller-manager-old-k8s-version-318786" [4ae0e32a-b5f2-4e37-82d1-d76bfabbedd5] Running
	I1124 14:00:39.781055  203121 system_pods.go:89] "kube-proxy-jwmdg" [11a8b197-dd22-45df-9593-66d16fdefa80] Running
	I1124 14:00:39.781061  203121 system_pods.go:89] "kube-scheduler-old-k8s-version-318786" [01641e80-7a9e-48c2-b9e3-d384beab62d7] Running
	I1124 14:00:39.781067  203121 system_pods.go:89] "storage-provisioner" [2298aa73-9529-42f0-a0ec-22197acfa4ba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:00:39.781080  203121 retry.go:31] will retry after 249.97776ms: missing components: kube-dns
	I1124 14:00:40.063092  203121 system_pods.go:86] 8 kube-system pods found
	I1124 14:00:40.063130  203121 system_pods.go:89] "coredns-5dd5756b68-n7s8h" [72202b02-1ca2-4c69-ad47-3f1ef90ba8ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:00:40.063139  203121 system_pods.go:89] "etcd-old-k8s-version-318786" [dd78fff3-f901-4dc0-9c77-907dbb69e36d] Running
	I1124 14:00:40.063145  203121 system_pods.go:89] "kindnet-z4rkx" [053d781f-846e-4391-a537-edd057019339] Running
	I1124 14:00:40.063149  203121 system_pods.go:89] "kube-apiserver-old-k8s-version-318786" [7f0596ec-97f5-4a70-974d-38c5d9a51273] Running
	I1124 14:00:40.063180  203121 system_pods.go:89] "kube-controller-manager-old-k8s-version-318786" [4ae0e32a-b5f2-4e37-82d1-d76bfabbedd5] Running
	I1124 14:00:40.063193  203121 system_pods.go:89] "kube-proxy-jwmdg" [11a8b197-dd22-45df-9593-66d16fdefa80] Running
	I1124 14:00:40.063198  203121 system_pods.go:89] "kube-scheduler-old-k8s-version-318786" [01641e80-7a9e-48c2-b9e3-d384beab62d7] Running
	I1124 14:00:40.063203  203121 system_pods.go:89] "storage-provisioner" [2298aa73-9529-42f0-a0ec-22197acfa4ba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:00:40.063219  203121 retry.go:31] will retry after 436.268576ms: missing components: kube-dns
	I1124 14:00:40.504166  203121 system_pods.go:86] 8 kube-system pods found
	I1124 14:00:40.504245  203121 system_pods.go:89] "coredns-5dd5756b68-n7s8h" [72202b02-1ca2-4c69-ad47-3f1ef90ba8ba] Running
	I1124 14:00:40.504259  203121 system_pods.go:89] "etcd-old-k8s-version-318786" [dd78fff3-f901-4dc0-9c77-907dbb69e36d] Running
	I1124 14:00:40.504264  203121 system_pods.go:89] "kindnet-z4rkx" [053d781f-846e-4391-a537-edd057019339] Running
	I1124 14:00:40.504269  203121 system_pods.go:89] "kube-apiserver-old-k8s-version-318786" [7f0596ec-97f5-4a70-974d-38c5d9a51273] Running
	I1124 14:00:40.504274  203121 system_pods.go:89] "kube-controller-manager-old-k8s-version-318786" [4ae0e32a-b5f2-4e37-82d1-d76bfabbedd5] Running
	I1124 14:00:40.504279  203121 system_pods.go:89] "kube-proxy-jwmdg" [11a8b197-dd22-45df-9593-66d16fdefa80] Running
	I1124 14:00:40.504283  203121 system_pods.go:89] "kube-scheduler-old-k8s-version-318786" [01641e80-7a9e-48c2-b9e3-d384beab62d7] Running
	I1124 14:00:40.504287  203121 system_pods.go:89] "storage-provisioner" [2298aa73-9529-42f0-a0ec-22197acfa4ba] Running
	I1124 14:00:40.504296  203121 system_pods.go:126] duration metric: took 1.034219513s to wait for k8s-apps to be running ...
	I1124 14:00:40.504307  203121 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:00:40.504364  203121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:00:40.517880  203121 system_svc.go:56] duration metric: took 13.563315ms WaitForService to wait for kubelet
	I1124 14:00:40.517964  203121 kubeadm.go:587] duration metric: took 14.455781279s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:00:40.517991  203121 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:00:40.520930  203121 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:00:40.520963  203121 node_conditions.go:123] node cpu capacity is 2
	I1124 14:00:40.520978  203121 node_conditions.go:105] duration metric: took 2.980003ms to run NodePressure ...
	I1124 14:00:40.520990  203121 start.go:242] waiting for startup goroutines ...
	I1124 14:00:40.520998  203121 start.go:247] waiting for cluster config update ...
	I1124 14:00:40.521010  203121 start.go:256] writing updated cluster config ...
	I1124 14:00:40.521298  203121 ssh_runner.go:195] Run: rm -f paused
	I1124 14:00:40.525324  203121 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:00:40.529797  203121 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-n7s8h" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:40.535607  203121 pod_ready.go:94] pod "coredns-5dd5756b68-n7s8h" is "Ready"
	I1124 14:00:40.535639  203121 pod_ready.go:86] duration metric: took 5.816258ms for pod "coredns-5dd5756b68-n7s8h" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:40.539181  203121 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-318786" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:40.544657  203121 pod_ready.go:94] pod "etcd-old-k8s-version-318786" is "Ready"
	I1124 14:00:40.544685  203121 pod_ready.go:86] duration metric: took 5.478924ms for pod "etcd-old-k8s-version-318786" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:40.548165  203121 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-318786" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:40.553506  203121 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-318786" is "Ready"
	I1124 14:00:40.553538  203121 pod_ready.go:86] duration metric: took 5.343284ms for pod "kube-apiserver-old-k8s-version-318786" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:40.556924  203121 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-318786" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:40.929692  203121 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-318786" is "Ready"
	I1124 14:00:40.929725  203121 pod_ready.go:86] duration metric: took 372.7723ms for pod "kube-controller-manager-old-k8s-version-318786" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:41.130990  203121 pod_ready.go:83] waiting for pod "kube-proxy-jwmdg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:41.530005  203121 pod_ready.go:94] pod "kube-proxy-jwmdg" is "Ready"
	I1124 14:00:41.530034  203121 pod_ready.go:86] duration metric: took 399.016962ms for pod "kube-proxy-jwmdg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:41.730026  203121 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-318786" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:42.131071  203121 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-318786" is "Ready"
	I1124 14:00:42.131114  203121 pod_ready.go:86] duration metric: took 401.061008ms for pod "kube-scheduler-old-k8s-version-318786" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:00:42.131129  203121 pod_ready.go:40] duration metric: took 1.60575817s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:00:42.200914  203121 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1124 14:00:42.204172  203121 out.go:203] 
	W1124 14:00:42.207213  203121 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 14:00:42.210285  203121 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 14:00:42.214390  203121 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-318786" cluster and "default" namespace by default
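
	(Editor's note) The start log above walks through minikube's readiness pipeline: apply the CNI manifest, poll `kubectl get sa default` until the default service account exists, wait for the node to report Ready, then probe the apiserver's `/healthz` endpoint before checking kube-system pods. As a rough illustration of that last step only, the sketch below polls an HTTPS healthz URL until it answers "ok". It is not minikube's implementation; the URL, timeout, and interval are placeholder assumptions.

```go
// healthzpoll: a minimal sketch of the "waiting for apiserver healthz status"
// step recorded in the log above. NOT minikube's code; the endpoint, timeout,
// and interval below are illustrative placeholders.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout, interval time.Duration) error {
	// A real client would trust the cluster CA; skipping verification here
	// keeps the sketch self-contained against a self-signed apiserver cert.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// /healthz returns HTTP 200 with the body "ok" when healthy,
			// matching the "returned 200: ok" lines in the log.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	// Placeholder endpoint; substitute the cluster's advertise address.
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute, 500*time.Millisecond); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}
```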
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	70e558ad037eb       1611cd07b61d5       9 seconds ago       Running             busybox                   0                   f472329e9fd63       busybox                                          default
	33ca9b6d24a80       ba04bb24b9575       15 seconds ago      Running             storage-provisioner       0                   9de766e43deb4       storage-provisioner                              kube-system
	d1e4297a18de5       97e04611ad434       15 seconds ago      Running             coredns                   0                   1b5cb0ca09af2       coredns-5dd5756b68-n7s8h                         kube-system
	8a5ceb46ea7cb       b1a8c6f707935       26 seconds ago      Running             kindnet-cni               0                   e9b4fd516b97c       kindnet-z4rkx                                    kube-system
	e431b25999ece       940f54a5bcae9       28 seconds ago      Running             kube-proxy                0                   68384e9c54fe8       kube-proxy-jwmdg                                 kube-system
	64ea1db6adeec       00543d2fe5d71       49 seconds ago      Running             kube-apiserver            0                   6e10952c6964b       kube-apiserver-old-k8s-version-318786            kube-system
	d422fb0577ca7       46cc66ccc7c19       49 seconds ago      Running             kube-controller-manager   0                   ede8e07dcdc74       kube-controller-manager-old-k8s-version-318786   kube-system
	0769df21ce83c       762dce4090c5f       49 seconds ago      Running             kube-scheduler            0                   6729e51d9cdf6       kube-scheduler-old-k8s-version-318786            kube-system
	a96dcde7b48e2       9cdd6470f48c8       49 seconds ago      Running             etcd                      0                   388ca052bc258       etcd-old-k8s-version-318786                      kube-system
	
	
	==> containerd <==
	Nov 24 14:00:39 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:39.610323957Z" level=info msg="connecting to shim d1e4297a18de5a35eef1e955a0f6b73d8881ba2296e59d8acaed4614dce5de51" address="unix:///run/containerd/s/f62f275e67577be37030e893196dc98d73b2044e58d241d1a7f99ccee4904d24" protocol=ttrpc version=3
	Nov 24 14:00:39 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:39.620434869Z" level=info msg="CreateContainer within sandbox \"9de766e43deb416449962bc7301bab891c72b0af9fb329bb4d8e4ff8ef66bff4\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Nov 24 14:00:39 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:39.644253353Z" level=info msg="Container 33ca9b6d24a80a1f0470355c5dc5bf87df622a7ffd33dad20b3a66e3d42820fb: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 14:00:39 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:39.663968611Z" level=info msg="CreateContainer within sandbox \"9de766e43deb416449962bc7301bab891c72b0af9fb329bb4d8e4ff8ef66bff4\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"33ca9b6d24a80a1f0470355c5dc5bf87df622a7ffd33dad20b3a66e3d42820fb\""
	Nov 24 14:00:39 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:39.665502745Z" level=info msg="StartContainer for \"33ca9b6d24a80a1f0470355c5dc5bf87df622a7ffd33dad20b3a66e3d42820fb\""
	Nov 24 14:00:39 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:39.666531739Z" level=info msg="connecting to shim 33ca9b6d24a80a1f0470355c5dc5bf87df622a7ffd33dad20b3a66e3d42820fb" address="unix:///run/containerd/s/25a7b18f3f0941131e8c32d45d1f9f3bcee38bf8a73b1e3195d36d7532fce44f" protocol=ttrpc version=3
	Nov 24 14:00:39 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:39.711141089Z" level=info msg="StartContainer for \"d1e4297a18de5a35eef1e955a0f6b73d8881ba2296e59d8acaed4614dce5de51\" returns successfully"
	Nov 24 14:00:39 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:39.756705480Z" level=info msg="StartContainer for \"33ca9b6d24a80a1f0470355c5dc5bf87df622a7ffd33dad20b3a66e3d42820fb\" returns successfully"
	Nov 24 14:00:42 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:42.744539553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f566ecf0-6907-464c-9185-0f1cac06d38f,Namespace:default,Attempt:0,}"
	Nov 24 14:00:42 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:42.796555926Z" level=info msg="connecting to shim f472329e9fd635f4d2ecb8d02d86100f8c593bf1ea6b1e68f6aab8b27bbcb144" address="unix:///run/containerd/s/e47b16e174c686888228b35f0ff63c9e1e5e13d47c7f7c2e532fdeedd0981c84" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 14:00:42 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:42.853864201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f566ecf0-6907-464c-9185-0f1cac06d38f,Namespace:default,Attempt:0,} returns sandbox id \"f472329e9fd635f4d2ecb8d02d86100f8c593bf1ea6b1e68f6aab8b27bbcb144\""
	Nov 24 14:00:42 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:42.855634629Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.151334885Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.153450408Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.156363448Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.161551496Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.162193515Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.306509548s"
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.162249565Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.179990125Z" level=info msg="CreateContainer within sandbox \"f472329e9fd635f4d2ecb8d02d86100f8c593bf1ea6b1e68f6aab8b27bbcb144\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.248189267Z" level=info msg="Container 70e558ad037eb593fa44b07e4fd36f48454dee00712743ce51a58d742a33605b: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.262062616Z" level=info msg="CreateContainer within sandbox \"f472329e9fd635f4d2ecb8d02d86100f8c593bf1ea6b1e68f6aab8b27bbcb144\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"70e558ad037eb593fa44b07e4fd36f48454dee00712743ce51a58d742a33605b\""
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.263385826Z" level=info msg="StartContainer for \"70e558ad037eb593fa44b07e4fd36f48454dee00712743ce51a58d742a33605b\""
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.265058480Z" level=info msg="connecting to shim 70e558ad037eb593fa44b07e4fd36f48454dee00712743ce51a58d742a33605b" address="unix:///run/containerd/s/e47b16e174c686888228b35f0ff63c9e1e5e13d47c7f7c2e532fdeedd0981c84" protocol=ttrpc version=3
	Nov 24 14:00:45 old-k8s-version-318786 containerd[755]: time="2025-11-24T14:00:45.370550827Z" level=info msg="StartContainer for \"70e558ad037eb593fa44b07e4fd36f48454dee00712743ce51a58d742a33605b\" returns successfully"
	Nov 24 14:00:51 old-k8s-version-318786 containerd[755]: E1124 14:00:51.571973     755 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [d1e4297a18de5a35eef1e955a0f6b73d8881ba2296e59d8acaed4614dce5de51] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60396 - 50045 "HINFO IN 8149976766644082851.319243235608499577. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006788489s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-318786
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-318786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=old-k8s-version-318786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_00_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:00:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-318786
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:00:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:00:43 +0000   Mon, 24 Nov 2025 14:00:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:00:43 +0000   Mon, 24 Nov 2025 14:00:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:00:43 +0000   Mon, 24 Nov 2025 14:00:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:00:43 +0000   Mon, 24 Nov 2025 14:00:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-318786
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                259561de-786f-47f9-8e4d-12bddad03b80
	  Boot ID:                    dd480c26-e101-4930-b98c-54c06b430fdc
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-n7s8h                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-old-k8s-version-318786                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kindnet-z4rkx                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-318786             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-318786    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-jwmdg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-318786             100m (5%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 43s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s   kubelet          Node old-k8s-version-318786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s   kubelet          Node old-k8s-version-318786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s   kubelet          Node old-k8s-version-318786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  42s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           30s   node-controller  Node old-k8s-version-318786 event: Registered Node old-k8s-version-318786 in Controller
	  Normal  NodeReady                16s   kubelet          Node old-k8s-version-318786 status is now: NodeReady
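
	(Editor's note) In the "Allocated resources" block above, the percentages are the summed pod requests divided by the node's allocatable capacity: 850m of CPU requested against 2000m allocatable rounds down to 42%. The tiny sketch below reproduces that arithmetic with figures copied from the table; the helper itself is illustrative, not kubectl's code.

```go
// Illustrative only: redoes the percentage arithmetic from the
// "Allocated resources" table (850m CPU requested, 2 CPUs allocatable).
package main

import "fmt"

func main() {
	requestsMilli := int64(850)         // 100m+100m+100m+250m+200m+100m from the pod table above
	allocatableMilli := int64(2) * 1000 // node reports 2 CPUs allocatable
	fmt.Printf("cpu requests: %dm (%d%%)\n", requestsMilli, requestsMilli*100/allocatableMilli)
}
```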
	
	
	==> dmesg <==
	[Nov24 12:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014697] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497291] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033884] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.804993] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.476130] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [a96dcde7b48e2020162f86ef991d82171cf903dc40c2588013e878e07607a6eb] <==
	{"level":"info","ts":"2025-11-24T14:00:05.836588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-24T14:00:05.836695Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-24T14:00:05.836985Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-24T14:00:05.83715Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T14:00:05.837189Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T14:00:05.837186Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-24T14:00:05.837211Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-24T14:00:06.715956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-24T14:00:06.716187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-24T14:00:06.716278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-24T14:00:06.716408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-24T14:00:06.716497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-24T14:00:06.716591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-24T14:00:06.716663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-24T14:00:06.719119Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:00:06.724173Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-318786 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T14:00:06.727971Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:00:06.728194Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:00:06.728301Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:00:06.728041Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T14:00:06.732297Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-24T14:00:06.728075Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T14:00:06.73389Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T14:00:06.739971Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T14:00:06.747818Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:00:55 up  1:43,  0 user,  load average: 3.35, 3.70, 3.04
	Linux old-k8s-version-318786 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8a5ceb46ea7cbcd9a345bdf9ba11d0c7a3a990148842c5c44246730c76d8948d] <==
	I1124 14:00:28.769606       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:00:28.860713       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 14:00:28.860851       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:00:28.860870       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:00:28.860885       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:00:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:00:29.062756       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:00:29.064202       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:00:29.064283       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:00:29.064439       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 14:00:29.264984       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:00:29.265101       1 metrics.go:72] Registering metrics
	I1124 14:00:29.265206       1 controller.go:711] "Syncing nftables rules"
	I1124 14:00:39.066105       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:00:39.066164       1 main.go:301] handling current node
	I1124 14:00:49.064077       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:00:49.064224       1 main.go:301] handling current node
	
	
	==> kube-apiserver [64ea1db6adeecccf4211992b471a4088bba1825d5764c029cd41c736f16d8131] <==
	I1124 14:00:09.559574       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 14:00:09.559602       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 14:00:09.567371       1 aggregator.go:166] initial CRD sync complete...
	I1124 14:00:09.567396       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 14:00:09.567404       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 14:00:09.567413       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:00:09.570195       1 controller.go:624] quota admission added evaluator for: namespaces
	E1124 14:00:09.602455       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1124 14:00:09.654324       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1124 14:00:09.818311       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:00:10.356017       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 14:00:10.369141       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 14:00:10.369180       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:00:11.220927       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:00:11.271999       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:00:11.406464       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 14:00:11.418391       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 14:00:11.420227       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 14:00:11.426883       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:00:11.578646       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 14:00:12.895802       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 14:00:12.925996       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 14:00:12.938109       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1124 14:00:25.666171       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1124 14:00:25.763116       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [d422fb0577ca71bb502e21fc4c5afd81d722a57cf4424a6d0acafef3ae4afb9a] <==
	I1124 14:00:25.810858       1 range_allocator.go:380] "Set node PodCIDR" node="old-k8s-version-318786" podCIDRs=["10.244.0.0/24"]
	I1124 14:00:25.820650       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-318786" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1124 14:00:25.832097       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-318786" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1124 14:00:25.835948       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-n7s8h"
	I1124 14:00:25.836226       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-z4rkx"
	I1124 14:00:25.844347       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jwmdg"
	I1124 14:00:25.872136       1 shared_informer.go:318] Caches are synced for HPA
	I1124 14:00:25.873361       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-nhcwg"
	I1124 14:00:25.905108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="206.561387ms"
	I1124 14:00:25.943326       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="37.986992ms"
	I1124 14:00:25.943650       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.392µs"
	I1124 14:00:26.225808       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 14:00:26.225842       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 14:00:26.240729       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 14:00:27.499329       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1124 14:00:27.521996       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-nhcwg"
	I1124 14:00:27.537841       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="38.305665ms"
	I1124 14:00:27.559719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.827601ms"
	I1124 14:00:27.559805       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.644µs"
	I1124 14:00:39.122848       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.538µs"
	I1124 14:00:39.150933       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.22µs"
	I1124 14:00:40.276969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="184.922µs"
	I1124 14:00:40.328812       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.750431ms"
	I1124 14:00:40.330201       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.312µs"
	I1124 14:00:40.747463       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [e431b25999ece5eb3499ee68f2c85868448494e4787845d9737ad20b4a20f2f8] <==
	I1124 14:00:26.865991       1 server_others.go:69] "Using iptables proxy"
	I1124 14:00:26.884883       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1124 14:00:26.934067       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:00:26.935893       1 server_others.go:152] "Using iptables Proxier"
	I1124 14:00:26.936119       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 14:00:26.936132       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 14:00:26.936170       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 14:00:26.936420       1 server.go:846] "Version info" version="v1.28.0"
	I1124 14:00:26.936439       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:00:26.937512       1 config.go:188] "Starting service config controller"
	I1124 14:00:26.937582       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 14:00:26.937602       1 config.go:97] "Starting endpoint slice config controller"
	I1124 14:00:26.937606       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 14:00:26.938430       1 config.go:315] "Starting node config controller"
	I1124 14:00:26.938440       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 14:00:27.039179       1 shared_informer.go:318] Caches are synced for node config
	I1124 14:00:27.039222       1 shared_informer.go:318] Caches are synced for service config
	I1124 14:00:27.039271       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0769df21ce83c4995a35d15a4e7ae3000b8a5d86168fda1bff6738b8943c92ef] <==
	W1124 14:00:10.860716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1124 14:00:10.860734       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1124 14:00:10.861473       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1124 14:00:10.861503       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 14:00:10.866658       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1124 14:00:10.866694       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1124 14:00:10.866737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1124 14:00:10.866752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1124 14:00:10.867029       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 14:00:10.867053       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 14:00:10.867116       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1124 14:00:10.867134       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1124 14:00:10.867194       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1124 14:00:10.867211       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1124 14:00:10.867277       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1124 14:00:10.867299       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1124 14:00:10.869201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1124 14:00:10.869232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1124 14:00:10.869290       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1124 14:00:10.869420       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1124 14:00:10.869379       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1124 14:00:10.869453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1124 14:00:10.870338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1124 14:00:10.870513       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1124 14:00:11.746244       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 14:00:25 old-k8s-version-318786 kubelet[1527]: I1124 14:00:25.889706    1527 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 14:00:25 old-k8s-version-318786 kubelet[1527]: I1124 14:00:25.891508    1527 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 14:00:25 old-k8s-version-318786 kubelet[1527]: I1124 14:00:25.894804    1527 topology_manager.go:215] "Topology Admit Handler" podUID="11a8b197-dd22-45df-9593-66d16fdefa80" podNamespace="kube-system" podName="kube-proxy-jwmdg"
	Nov 24 14:00:25 old-k8s-version-318786 kubelet[1527]: I1124 14:00:25.914677    1527 topology_manager.go:215] "Topology Admit Handler" podUID="053d781f-846e-4391-a537-edd057019339" podNamespace="kube-system" podName="kindnet-z4rkx"
	Nov 24 14:00:26 old-k8s-version-318786 kubelet[1527]: I1124 14:00:26.018048    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/053d781f-846e-4391-a537-edd057019339-lib-modules\") pod \"kindnet-z4rkx\" (UID: \"053d781f-846e-4391-a537-edd057019339\") " pod="kube-system/kindnet-z4rkx"
	Nov 24 14:00:26 old-k8s-version-318786 kubelet[1527]: I1124 14:00:26.018107    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/11a8b197-dd22-45df-9593-66d16fdefa80-kube-proxy\") pod \"kube-proxy-jwmdg\" (UID: \"11a8b197-dd22-45df-9593-66d16fdefa80\") " pod="kube-system/kube-proxy-jwmdg"
	Nov 24 14:00:26 old-k8s-version-318786 kubelet[1527]: I1124 14:00:26.018131    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11a8b197-dd22-45df-9593-66d16fdefa80-lib-modules\") pod \"kube-proxy-jwmdg\" (UID: \"11a8b197-dd22-45df-9593-66d16fdefa80\") " pod="kube-system/kube-proxy-jwmdg"
	Nov 24 14:00:26 old-k8s-version-318786 kubelet[1527]: I1124 14:00:26.018158    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11a8b197-dd22-45df-9593-66d16fdefa80-xtables-lock\") pod \"kube-proxy-jwmdg\" (UID: \"11a8b197-dd22-45df-9593-66d16fdefa80\") " pod="kube-system/kube-proxy-jwmdg"
	Nov 24 14:00:26 old-k8s-version-318786 kubelet[1527]: I1124 14:00:26.018212    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wldf\" (UniqueName: \"kubernetes.io/projected/053d781f-846e-4391-a537-edd057019339-kube-api-access-2wldf\") pod \"kindnet-z4rkx\" (UID: \"053d781f-846e-4391-a537-edd057019339\") " pod="kube-system/kindnet-z4rkx"
	Nov 24 14:00:26 old-k8s-version-318786 kubelet[1527]: I1124 14:00:26.018240    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/053d781f-846e-4391-a537-edd057019339-cni-cfg\") pod \"kindnet-z4rkx\" (UID: \"053d781f-846e-4391-a537-edd057019339\") " pod="kube-system/kindnet-z4rkx"
	Nov 24 14:00:26 old-k8s-version-318786 kubelet[1527]: I1124 14:00:26.018265    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/053d781f-846e-4391-a537-edd057019339-xtables-lock\") pod \"kindnet-z4rkx\" (UID: \"053d781f-846e-4391-a537-edd057019339\") " pod="kube-system/kindnet-z4rkx"
	Nov 24 14:00:26 old-k8s-version-318786 kubelet[1527]: I1124 14:00:26.018289    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj98c\" (UniqueName: \"kubernetes.io/projected/11a8b197-dd22-45df-9593-66d16fdefa80-kube-api-access-zj98c\") pod \"kube-proxy-jwmdg\" (UID: \"11a8b197-dd22-45df-9593-66d16fdefa80\") " pod="kube-system/kube-proxy-jwmdg"
	Nov 24 14:00:27 old-k8s-version-318786 kubelet[1527]: I1124 14:00:27.246948    1527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jwmdg" podStartSLOduration=2.246903083 podCreationTimestamp="2025-11-24 14:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:00:27.246446757 +0000 UTC m=+14.385235109" watchObservedRunningTime="2025-11-24 14:00:27.246903083 +0000 UTC m=+14.385691436"
	Nov 24 14:00:33 old-k8s-version-318786 kubelet[1527]: I1124 14:00:33.074010    1527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-z4rkx" podStartSLOduration=5.998729082 podCreationTimestamp="2025-11-24 14:00:25 +0000 UTC" firstStartedPulling="2025-11-24 14:00:26.522078288 +0000 UTC m=+13.660866641" lastFinishedPulling="2025-11-24 14:00:28.597316912 +0000 UTC m=+15.736105264" observedRunningTime="2025-11-24 14:00:29.252063076 +0000 UTC m=+16.390851428" watchObservedRunningTime="2025-11-24 14:00:33.073967705 +0000 UTC m=+20.212756058"
	Nov 24 14:00:39 old-k8s-version-318786 kubelet[1527]: I1124 14:00:39.082518    1527 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 24 14:00:39 old-k8s-version-318786 kubelet[1527]: I1124 14:00:39.121702    1527 topology_manager.go:215] "Topology Admit Handler" podUID="72202b02-1ca2-4c69-ad47-3f1ef90ba8ba" podNamespace="kube-system" podName="coredns-5dd5756b68-n7s8h"
	Nov 24 14:00:39 old-k8s-version-318786 kubelet[1527]: I1124 14:00:39.132093    1527 topology_manager.go:215] "Topology Admit Handler" podUID="2298aa73-9529-42f0-a0ec-22197acfa4ba" podNamespace="kube-system" podName="storage-provisioner"
	Nov 24 14:00:39 old-k8s-version-318786 kubelet[1527]: I1124 14:00:39.309362    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68nxx\" (UniqueName: \"kubernetes.io/projected/72202b02-1ca2-4c69-ad47-3f1ef90ba8ba-kube-api-access-68nxx\") pod \"coredns-5dd5756b68-n7s8h\" (UID: \"72202b02-1ca2-4c69-ad47-3f1ef90ba8ba\") " pod="kube-system/coredns-5dd5756b68-n7s8h"
	Nov 24 14:00:39 old-k8s-version-318786 kubelet[1527]: I1124 14:00:39.309430    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z64wd\" (UniqueName: \"kubernetes.io/projected/2298aa73-9529-42f0-a0ec-22197acfa4ba-kube-api-access-z64wd\") pod \"storage-provisioner\" (UID: \"2298aa73-9529-42f0-a0ec-22197acfa4ba\") " pod="kube-system/storage-provisioner"
	Nov 24 14:00:39 old-k8s-version-318786 kubelet[1527]: I1124 14:00:39.309458    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72202b02-1ca2-4c69-ad47-3f1ef90ba8ba-config-volume\") pod \"coredns-5dd5756b68-n7s8h\" (UID: \"72202b02-1ca2-4c69-ad47-3f1ef90ba8ba\") " pod="kube-system/coredns-5dd5756b68-n7s8h"
	Nov 24 14:00:39 old-k8s-version-318786 kubelet[1527]: I1124 14:00:39.309484    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2298aa73-9529-42f0-a0ec-22197acfa4ba-tmp\") pod \"storage-provisioner\" (UID: \"2298aa73-9529-42f0-a0ec-22197acfa4ba\") " pod="kube-system/storage-provisioner"
	Nov 24 14:00:40 old-k8s-version-318786 kubelet[1527]: I1124 14:00:40.295007    1527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-n7s8h" podStartSLOduration=15.294930673 podCreationTimestamp="2025-11-24 14:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:00:40.27945045 +0000 UTC m=+27.418238811" watchObservedRunningTime="2025-11-24 14:00:40.294930673 +0000 UTC m=+27.433719026"
	Nov 24 14:00:40 old-k8s-version-318786 kubelet[1527]: I1124 14:00:40.313747    1527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.313703157 podCreationTimestamp="2025-11-24 14:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:00:40.296336403 +0000 UTC m=+27.435124755" watchObservedRunningTime="2025-11-24 14:00:40.313703157 +0000 UTC m=+27.452491510"
	Nov 24 14:00:42 old-k8s-version-318786 kubelet[1527]: I1124 14:00:42.439571    1527 topology_manager.go:215] "Topology Admit Handler" podUID="f566ecf0-6907-464c-9185-0f1cac06d38f" podNamespace="default" podName="busybox"
	Nov 24 14:00:42 old-k8s-version-318786 kubelet[1527]: I1124 14:00:42.534626    1527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9mqp\" (UniqueName: \"kubernetes.io/projected/f566ecf0-6907-464c-9185-0f1cac06d38f-kube-api-access-t9mqp\") pod \"busybox\" (UID: \"f566ecf0-6907-464c-9185-0f1cac06d38f\") " pod="default/busybox"
	
	
	==> storage-provisioner [33ca9b6d24a80a1f0470355c5dc5bf87df622a7ffd33dad20b3a66e3d42820fb] <==
	I1124 14:00:39.762113       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:00:39.776081       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:00:39.776154       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 14:00:39.787120       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:00:39.787379       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-318786_ab0f5e48-32a3-4e29-9ee1-b1971bc22e35!
	I1124 14:00:39.788450       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe6ba064-a6c2-4186-b355-eb48ac5eb1d0", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-318786_ab0f5e48-32a3-4e29-9ee1-b1971bc22e35 became leader
	I1124 14:00:39.888593       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-318786_ab0f5e48-32a3-4e29-9ee1-b1971bc22e35!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-318786 -n old-k8s-version-318786
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-318786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (13.81s)
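Note on the failing assertion: the test execs `ulimit -n` inside the busybox pod and expects the container's open-file limit to be 1048576, but the pod reported 1024. A minimal sketch of the same check run outside the test harness is shown below; the helper name checkNoFile is illustrative and not part of start_stop_delete_test.go, and the context name and expected value are taken from the log above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkNoFile execs `ulimit -n` inside the busybox pod via kubectl
// (the same command the DeployApp test runs) and compares the result
// with the limit the test expects.
func checkNoFile(kubeContext, want string) error {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		return fmt.Errorf("kubectl exec failed: %w", err)
	}
	got := strings.TrimSpace(string(out))
	if got != want {
		return fmt.Errorf("'ulimit -n' returned %s, expected %s", got, want)
	}
	return nil
}

func main() {
	// Values taken from the test output above.
	if err := checkNoFile("old-k8s-version-318786", "1048576"); err != nil {
		fmt.Println(err)
	}
}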

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-609438 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ad098064-4a27-4674-9c05-03b1e253a816] Pending
helpers_test.go:352: "busybox" [ad098064-4a27-4674-9c05-03b1e253a816] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ad098064-4a27-4674-9c05-03b1e253a816] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.006934069s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-609438 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-609438
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-609438:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e60e4efae158c5a5a7453ef33e59ec253543fb035754fa5f4e30943f9ec7969a",
	        "Created": "2025-11-24T14:02:23.924453268Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213017,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:02:24.041059545Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/e60e4efae158c5a5a7453ef33e59ec253543fb035754fa5f4e30943f9ec7969a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e60e4efae158c5a5a7453ef33e59ec253543fb035754fa5f4e30943f9ec7969a/hostname",
	        "HostsPath": "/var/lib/docker/containers/e60e4efae158c5a5a7453ef33e59ec253543fb035754fa5f4e30943f9ec7969a/hosts",
	        "LogPath": "/var/lib/docker/containers/e60e4efae158c5a5a7453ef33e59ec253543fb035754fa5f4e30943f9ec7969a/e60e4efae158c5a5a7453ef33e59ec253543fb035754fa5f4e30943f9ec7969a-json.log",
	        "Name": "/default-k8s-diff-port-609438",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-609438:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-609438",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e60e4efae158c5a5a7453ef33e59ec253543fb035754fa5f4e30943f9ec7969a",
	                "LowerDir": "/var/lib/docker/overlay2/af912c60810958b2495a8e05e93b587823eb87ecd651998279108ce95e60bdd1-init/diff:/var/lib/docker/overlay2/f206897dad0d7c6b66379aa7c75402ab98ba158a4fc5aedf84eda3d57da10430/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af912c60810958b2495a8e05e93b587823eb87ecd651998279108ce95e60bdd1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af912c60810958b2495a8e05e93b587823eb87ecd651998279108ce95e60bdd1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af912c60810958b2495a8e05e93b587823eb87ecd651998279108ce95e60bdd1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-609438",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-609438/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-609438",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-609438",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-609438",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "92b3fe73ad5646614d6e8497cac5042fe28f99f96de535116de434d264224cc1",
	            "SandboxKey": "/var/run/docker/netns/92b3fe73ad56",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-609438": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:c0:c4:cd:26:9c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "18f87d422e57d01218d717420ae39221feb8c7f5806d615eefa583d8581f96bf",
	                    "EndpointID": "ece204dd69efe63eb7de38db0e784591e44a1308f3abc699a3f72a5774f87abc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-609438",
	                        "e60e4efae158"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-609438 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-609438 logs -n 25: (1.27806318s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p cilium-803934 sudo crio config                                                                                                                                                                                                                   │ cilium-803934                │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ delete  │ -p cilium-803934                                                                                                                                                                                                                                    │ cilium-803934                │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p force-systemd-env-134839 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-134839     │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p kubernetes-upgrade-758885                                                                                                                                                                                                                        │ kubernetes-upgrade-758885    │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p cert-expiration-865605 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-865605       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ force-systemd-env-134839 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-134839     │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p force-systemd-env-134839                                                                                                                                                                                                                         │ force-systemd-env-134839     │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p cert-options-440754 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-440754          │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ cert-options-440754 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-440754          │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ -p cert-options-440754 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-440754          │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p cert-options-440754                                                                                                                                                                                                                              │ cert-options-440754          │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p old-k8s-version-318786 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 14:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-318786 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ stop    │ -p old-k8s-version-318786 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-318786 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ start   │ -p old-k8s-version-318786 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ image   │ old-k8s-version-318786 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ pause   │ -p old-k8s-version-318786 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ start   │ -p cert-expiration-865605 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-865605       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ unpause │ -p old-k8s-version-318786 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ delete  │ -p old-k8s-version-318786                                                                                                                                                                                                                           │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ delete  │ -p old-k8s-version-318786                                                                                                                                                                                                                           │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ start   │ -p default-k8s-diff-port-609438 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:03 UTC │
	│ delete  │ -p cert-expiration-865605                                                                                                                                                                                                                           │ cert-expiration-865605       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ start   │ -p embed-certs-593634 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:02:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:02:25.355768  213570 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:02:25.355897  213570 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:02:25.355929  213570 out.go:374] Setting ErrFile to fd 2...
	I1124 14:02:25.355935  213570 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:02:25.356214  213570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 14:02:25.356610  213570 out.go:368] Setting JSON to false
	I1124 14:02:25.357458  213570 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6294,"bootTime":1763986651,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 14:02:25.357531  213570 start.go:143] virtualization:  
	I1124 14:02:25.363130  213570 out.go:179] * [embed-certs-593634] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:02:25.366080  213570 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:02:25.366317  213570 notify.go:221] Checking for updates...
	I1124 14:02:25.371678  213570 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:02:25.374517  213570 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:02:25.377392  213570 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	I1124 14:02:25.380291  213570 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:02:25.383233  213570 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:02:25.386803  213570 config.go:182] Loaded profile config "default-k8s-diff-port-609438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:02:25.386988  213570 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:02:25.428466  213570 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:02:25.428628  213570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:02:25.551573  213570 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:55 SystemTime:2025-11-24 14:02:25.537516273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:02:25.551683  213570 docker.go:319] overlay module found
	I1124 14:02:25.556682  213570 out.go:179] * Using the docker driver based on user configuration
	I1124 14:02:25.559709  213570 start.go:309] selected driver: docker
	I1124 14:02:25.559726  213570 start.go:927] validating driver "docker" against <nil>
	I1124 14:02:25.559738  213570 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:02:25.560805  213570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:02:25.668193  213570 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-24 14:02:25.655788801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:02:25.668344  213570 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:02:25.668552  213570 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:02:25.671717  213570 out.go:179] * Using Docker driver with root privileges
	I1124 14:02:25.674536  213570 cni.go:84] Creating CNI manager for ""
	I1124 14:02:25.674610  213570 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:02:25.674621  213570 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:02:25.674693  213570 start.go:353] cluster config:
	{Name:embed-certs-593634 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:02:25.677759  213570 out.go:179] * Starting "embed-certs-593634" primary control-plane node in "embed-certs-593634" cluster
	I1124 14:02:25.680596  213570 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 14:02:25.683549  213570 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:02:25.686518  213570 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:02:25.686579  213570 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1124 14:02:25.686594  213570 cache.go:65] Caching tarball of preloaded images
	I1124 14:02:25.686679  213570 preload.go:238] Found /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1124 14:02:25.686689  213570 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 14:02:25.686792  213570 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/config.json ...
	I1124 14:02:25.686808  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/config.json: {Name:mkcf0b417a9473ceb4b66956bfa520a43f4ebbeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
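	The profile saved above is plain JSON, so it can be inspected without minikube's internal types. A minimal Go sketch (the path is the one logged above; the Name and KubernetesConfig/KubernetesVersion keys appear in the config dump earlier in this log, everything else is illustrative, not minikube's own code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        // Path as logged above; adjust for your own MINIKUBE_HOME.
        path := "/home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/config.json"

        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }

        // Decode into a generic map so we do not depend on minikube's Go structs.
        var cfg map[string]interface{}
        if err := json.Unmarshal(data, &cfg); err != nil {
            panic(err)
        }

        // Fields such as Name and KubernetesConfig are visible in the config dump above.
        fmt.Println("profile:", cfg["Name"])
        if kc, ok := cfg["KubernetesConfig"].(map[string]interface{}); ok {
            fmt.Println("kubernetes version:", kc["KubernetesVersion"])
        }
    }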
	I1124 14:02:25.686945  213570 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:02:25.710900  213570 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:02:25.710919  213570 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:02:25.710933  213570 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:02:25.710962  213570 start.go:360] acquireMachinesLock for embed-certs-593634: {Name:mk435fa1f228450b1765e3435053e751c40a1834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:02:25.711053  213570 start.go:364] duration metric: took 77.449µs to acquireMachinesLock for "embed-certs-593634"
	I1124 14:02:25.711077  213570 start.go:93] Provisioning new machine with config: &{Name:embed-certs-593634 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 14:02:25.711153  213570 start.go:125] createHost starting for "" (driver="docker")
	I1124 14:02:23.909747  212383 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-609438 --name default-k8s-diff-port-609438 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-609438 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-609438 --network default-k8s-diff-port-609438 --ip 192.168.85.2 --volume default-k8s-diff-port-609438:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:02:24.307279  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Running}}
	I1124 14:02:24.327311  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:24.369313  212383 cli_runner.go:164] Run: docker exec default-k8s-diff-port-609438 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:02:24.459655  212383 oci.go:144] the created container "default-k8s-diff-port-609438" has a running status.
	I1124 14:02:24.459682  212383 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa...
	I1124 14:02:24.627125  212383 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:02:24.888609  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:24.933748  212383 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:02:24.933772  212383 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-609438 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 14:02:25.043026  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:25.089321  212383 machine.go:94] provisionDockerMachine start ...
	I1124 14:02:25.089431  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:25.153799  212383 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:25.154239  212383 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 14:02:25.154258  212383 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:02:25.461029  212383 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-609438
	
	I1124 14:02:25.461072  212383 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-609438"
	I1124 14:02:25.461152  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:25.543103  212383 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:25.543625  212383 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 14:02:25.543643  212383 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-609438 && echo "default-k8s-diff-port-609438" | sudo tee /etc/hostname
	I1124 14:02:25.773225  212383 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-609438
	
	I1124 14:02:25.773297  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:25.800013  212383 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:25.801080  212383 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 14:02:25.801108  212383 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-609438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-609438/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-609438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:02:26.006217  212383 main.go:143] libmachine: SSH cmd err, output: <nil>: 
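	The hosts-file command above is deliberately idempotent: it leaves /etc/hosts alone when the hostname is already present, otherwise it rewrites the existing 127.0.1.1 entry or appends one. A minimal Go sketch of the same check-then-edit logic (an illustration only, not minikube's implementation; the hostname and file path are the ones from the log):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the shell logic above: do nothing if the host is
    // already listed, otherwise rewrite the 127.0.1.1 line or append a new one.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if strings.Contains(string(data), hostname) {
            return nil // already present
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        entry := "127.0.1.1 " + hostname
        var out string
        if re.MatchString(string(data)) {
            out = re.ReplaceAllString(string(data), entry)
        } else {
            out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
        }
        return os.WriteFile(path, []byte(out), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "default-k8s-diff-port-609438"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }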
	I1124 14:02:26.006244  212383 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2368/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2368/.minikube}
	I1124 14:02:26.006263  212383 ubuntu.go:190] setting up certificates
	I1124 14:02:26.006272  212383 provision.go:84] configureAuth start
	I1124 14:02:26.006350  212383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-609438
	I1124 14:02:26.026909  212383 provision.go:143] copyHostCerts
	I1124 14:02:26.026970  212383 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem, removing ...
	I1124 14:02:26.026980  212383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem
	I1124 14:02:26.027046  212383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem (1082 bytes)
	I1124 14:02:26.027134  212383 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem, removing ...
	I1124 14:02:26.027140  212383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem
	I1124 14:02:26.027166  212383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem (1123 bytes)
	I1124 14:02:26.027243  212383 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem, removing ...
	I1124 14:02:26.027248  212383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem
	I1124 14:02:26.027271  212383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem (1679 bytes)
	I1124 14:02:26.027316  212383 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-609438 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-609438 localhost minikube]
	I1124 14:02:26.479334  212383 provision.go:177] copyRemoteCerts
	I1124 14:02:26.479453  212383 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:02:26.479529  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:26.509970  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:26.633721  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:02:26.665930  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 14:02:26.697677  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 14:02:26.732905  212383 provision.go:87] duration metric: took 726.609261ms to configureAuth
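	The configureAuth step above generates a server certificate whose SANs cover the loopback address, the container IP, the machine name, localhost and minikube. A minimal self-signed sketch of producing a certificate with that SAN list using Go's crypto/x509 (the real flow, per the log, signs with the ca.pem/ca-key.pem pair rather than self-signing; the 26280h lifetime matches the CertExpiration value in the cluster config):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // SANs taken from the provisioning log above.
        ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")}
        dns := []string{"default-k8s-diff-port-609438", "localhost", "minikube"}

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-609438"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
            DNSNames:     dns,
        }
        // Self-signed for brevity; a CA-signed cert would pass the CA cert and key here.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }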
	I1124 14:02:26.732938  212383 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:02:26.733137  212383 config.go:182] Loaded profile config "default-k8s-diff-port-609438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:02:26.733153  212383 machine.go:97] duration metric: took 1.643811371s to provisionDockerMachine
	I1124 14:02:26.733161  212383 client.go:176] duration metric: took 7.487822203s to LocalClient.Create
	I1124 14:02:26.733175  212383 start.go:167] duration metric: took 7.487885367s to libmachine.API.Create "default-k8s-diff-port-609438"
	I1124 14:02:26.733189  212383 start.go:293] postStartSetup for "default-k8s-diff-port-609438" (driver="docker")
	I1124 14:02:26.733198  212383 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:02:26.733271  212383 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:02:26.733323  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:26.763570  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:26.897119  212383 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:02:26.901182  212383 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:02:26.901211  212383 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:02:26.901223  212383 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/addons for local assets ...
	I1124 14:02:26.901281  212383 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/files for local assets ...
	I1124 14:02:26.901360  212383 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem -> 41782.pem in /etc/ssl/certs
	I1124 14:02:26.901463  212383 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:02:26.909763  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:02:26.930128  212383 start.go:296] duration metric: took 196.924439ms for postStartSetup
	I1124 14:02:26.930508  212383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-609438
	I1124 14:02:26.950744  212383 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/config.json ...
	I1124 14:02:26.951035  212383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:02:26.951091  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:26.973535  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:27.077778  212383 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:02:27.083066  212383 start.go:128] duration metric: took 7.841363739s to createHost
	I1124 14:02:27.083089  212383 start.go:83] releasing machines lock for "default-k8s-diff-port-609438", held for 7.84148292s
	I1124 14:02:27.083163  212383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-609438
	I1124 14:02:27.105539  212383 ssh_runner.go:195] Run: cat /version.json
	I1124 14:02:27.105585  212383 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:02:27.105661  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:27.105589  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:27.149461  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:27.157732  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:27.367320  212383 ssh_runner.go:195] Run: systemctl --version
	I1124 14:02:27.374447  212383 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:02:27.380473  212383 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:02:27.380647  212383 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:02:27.413935  212383 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 14:02:27.414007  212383 start.go:496] detecting cgroup driver to use...
	I1124 14:02:27.414056  212383 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:02:27.414133  212383 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 14:02:27.430159  212383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 14:02:27.444285  212383 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:02:27.444392  212383 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:02:27.461944  212383 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:02:27.481645  212383 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:02:27.639351  212383 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:02:27.799286  212383 docker.go:234] disabling docker service ...
	I1124 14:02:27.799350  212383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:02:27.831375  212383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:02:27.845484  212383 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:02:27.983498  212383 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:02:28.133537  212383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:02:28.150716  212383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:02:28.166057  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 14:02:28.175128  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 14:02:28.184145  212383 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 14:02:28.184265  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 14:02:28.192987  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:02:28.202626  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 14:02:28.211553  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:02:28.220020  212383 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:02:28.228018  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 14:02:28.236891  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 14:02:28.245507  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 14:02:28.254226  212383 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:02:28.262068  212383 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:02:28.269803  212383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:28.442896  212383 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 14:02:28.596361  212383 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 14:02:28.596444  212383 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 14:02:28.602936  212383 start.go:564] Will wait 60s for crictl version
	I1124 14:02:28.603014  212383 ssh_runner.go:195] Run: which crictl
	I1124 14:02:28.607012  212383 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:02:28.645174  212383 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
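	Both waits above ("Will wait 60s for socket path", "Will wait 60s for crictl version") boil down to polling until the containerd socket is usable. A minimal Go sketch of such a poll loop (the path and 60s timeout come from the log; the polling interval is an arbitrary choice):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for a unix socket path until it appears or the
    // timeout elapses, mirroring the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("containerd socket is ready")
    }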
	I1124 14:02:28.645247  212383 ssh_runner.go:195] Run: containerd --version
	I1124 14:02:28.669934  212383 ssh_runner.go:195] Run: containerd --version
	I1124 14:02:28.700929  212383 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 14:02:28.704729  212383 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-609438 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:02:28.734893  212383 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 14:02:28.738862  212383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:02:28.749508  212383 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-609438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-609438 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:02:28.749613  212383 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:02:28.749681  212383 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:02:28.782633  212383 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:02:28.782654  212383 containerd.go:534] Images already preloaded, skipping extraction
	I1124 14:02:28.782711  212383 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:02:28.839126  212383 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:02:28.839147  212383 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:02:28.839155  212383 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1124 14:02:28.839244  212383 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-609438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-609438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
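	The kubelet unit rendered above is later copied to the host as a systemd drop-in (see the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further below). A minimal Go sketch of writing such a drop-in locally, with the ExecStart line copied from the log (illustrative only; in the actual run the file is pushed over SSH):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Drop-in content as rendered in the log above.
        dropIn := `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-609438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
    `
        dir := "/etc/systemd/system/kubelet.service.d"
        if err := os.MkdirAll(dir, 0755); err != nil {
            panic(err)
        }
        path := filepath.Join(dir, "10-kubeadm.conf")
        if err := os.WriteFile(path, []byte(dropIn), 0644); err != nil {
            panic(err)
        }
        fmt.Println("wrote", path)
    }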
	I1124 14:02:28.839314  212383 ssh_runner.go:195] Run: sudo crictl info
	I1124 14:02:28.874904  212383 cni.go:84] Creating CNI manager for ""
	I1124 14:02:28.874924  212383 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:02:28.874940  212383 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:02:28.874963  212383 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-609438 NodeName:default-k8s-diff-port-609438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/c
erts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:02:28.875085  212383 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-609438"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
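	The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is written to /var/tmp/minikube/kubeadm.yaml.new a few steps below. A minimal Go sketch of walking those documents with gopkg.in/yaml.v3 (an assumed third-party dependency; the kind and controlPlaneEndpoint fields printed here are the ones visible in the config above):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3" // assumed dependency; any multi-document YAML parser works
    )

    func main() {
        // Path taken from the "scp memory --> /var/tmp/minikube/kubeadm.yaml.new" step below.
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // The file holds several YAML documents separated by "---".
        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            err := dec.Decode(&doc)
            if err == io.EOF {
                break
            }
            if err != nil {
                panic(err)
            }
            fmt.Println("kind:", doc["kind"])
            if doc["kind"] == "ClusterConfiguration" {
                fmt.Println("controlPlaneEndpoint:", doc["controlPlaneEndpoint"])
            }
        }
    }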
	
	I1124 14:02:28.875154  212383 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:02:28.884597  212383 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:02:28.884669  212383 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:02:25.714459  213570 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:02:25.714725  213570 start.go:159] libmachine.API.Create for "embed-certs-593634" (driver="docker")
	I1124 14:02:25.714819  213570 client.go:173] LocalClient.Create starting
	I1124 14:02:25.714954  213570 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem
	I1124 14:02:25.715008  213570 main.go:143] libmachine: Decoding PEM data...
	I1124 14:02:25.715051  213570 main.go:143] libmachine: Parsing certificate...
	I1124 14:02:25.715148  213570 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem
	I1124 14:02:25.715206  213570 main.go:143] libmachine: Decoding PEM data...
	I1124 14:02:25.715261  213570 main.go:143] libmachine: Parsing certificate...
	I1124 14:02:25.715745  213570 cli_runner.go:164] Run: docker network inspect embed-certs-593634 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:02:25.736780  213570 cli_runner.go:211] docker network inspect embed-certs-593634 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:02:25.736871  213570 network_create.go:284] running [docker network inspect embed-certs-593634] to gather additional debugging logs...
	I1124 14:02:25.736888  213570 cli_runner.go:164] Run: docker network inspect embed-certs-593634
	W1124 14:02:25.769114  213570 cli_runner.go:211] docker network inspect embed-certs-593634 returned with exit code 1
	I1124 14:02:25.769141  213570 network_create.go:287] error running [docker network inspect embed-certs-593634]: docker network inspect embed-certs-593634: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-593634 not found
	I1124 14:02:25.769154  213570 network_create.go:289] output of [docker network inspect embed-certs-593634]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-593634 not found
	
	** /stderr **
	I1124 14:02:25.769257  213570 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:02:25.800766  213570 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e5e15b13860d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:3d:37:c4:cc:77} reservation:<nil>}
	I1124 14:02:25.801103  213570 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-66593a990bce IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:c0:9b:bc:41:ca} reservation:<nil>}
	I1124 14:02:25.801995  213570 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-37e9fb0954cf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:0b:6f:6e:b2:8c} reservation:<nil>}
	I1124 14:02:25.802424  213570 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e9170}
	I1124 14:02:25.802442  213570 network_create.go:124] attempt to create docker network embed-certs-593634 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 14:02:25.802493  213570 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-593634 embed-certs-593634
	I1124 14:02:25.881093  213570 network_create.go:108] docker network embed-certs-593634 192.168.76.0/24 created
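	The subnet selection above walks candidate private /24 networks in order and takes the first one not already claimed. A rough Go sketch of the same idea, using local interface addresses as the "taken" signal (the real check inspects existing Docker bridges; the candidate list is the one from the log):

    package main

    import (
        "fmt"
        "net"
    )

    // usedByInterface reports whether any local interface address falls inside
    // cidr, a rough stand-in for the "skipping subnet ... that is taken" checks.
    func usedByInterface(cidr string) bool {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return true
        }
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return true
        }
        for _, a := range addrs {
            if ipn, ok := a.(*net.IPNet); ok && ipnet.Contains(ipn.IP) {
                return true
            }
        }
        return false
    }

    func main() {
        // Candidates in the same order the log walks them.
        for _, c := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"} {
            if !usedByInterface(c) {
                fmt.Println("using free private subnet", c)
                return
            }
            fmt.Println("skipping subnet", c, "that is taken")
        }
    }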
	I1124 14:02:25.881122  213570 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-593634" container
	I1124 14:02:25.881203  213570 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:02:25.903081  213570 cli_runner.go:164] Run: docker volume create embed-certs-593634 --label name.minikube.sigs.k8s.io=embed-certs-593634 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:02:25.931462  213570 oci.go:103] Successfully created a docker volume embed-certs-593634
	I1124 14:02:25.931542  213570 cli_runner.go:164] Run: docker run --rm --name embed-certs-593634-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-593634 --entrypoint /usr/bin/test -v embed-certs-593634:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:02:26.581166  213570 oci.go:107] Successfully prepared a docker volume embed-certs-593634
	I1124 14:02:26.581232  213570 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:02:26.581244  213570 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:02:26.581311  213570 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-593634:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 14:02:28.894421  212383 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1124 14:02:28.909480  212383 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:02:28.924519  212383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
	I1124 14:02:28.939585  212383 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:02:28.943813  212383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:02:28.954534  212383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:29.104027  212383 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:02:29.125453  212383 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438 for IP: 192.168.85.2
	I1124 14:02:29.125476  212383 certs.go:195] generating shared ca certs ...
	I1124 14:02:29.125503  212383 certs.go:227] acquiring lock for ca certs: {Name:mkcd8707c782acde0e57168c044a3df942dc4ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.125641  212383 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key
	I1124 14:02:29.125695  212383 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key
	I1124 14:02:29.125707  212383 certs.go:257] generating profile certs ...
	I1124 14:02:29.125768  212383 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.key
	I1124 14:02:29.125789  212383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.crt with IP's: []
	I1124 14:02:29.324459  212383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.crt ...
	I1124 14:02:29.324491  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.crt: {Name:mk8aada29dd487d5091685276369440b7d624321 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.324640  212383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.key ...
	I1124 14:02:29.324656  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.key: {Name:mka039edce6f440d55864b8259b2b6e6a4166f35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.324742  212383 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key.0b070d75
	I1124 14:02:29.324762  212383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt.0b070d75 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 14:02:29.388053  212383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt.0b070d75 ...
	I1124 14:02:29.388089  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt.0b070d75: {Name:mk8c33f3dd28832381eccdbc39352bbcf3fad513 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.388234  212383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key.0b070d75 ...
	I1124 14:02:29.388250  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key.0b070d75: {Name:mk1a2d7229ced6b28d71658195699ecc4e6d6cbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.388323  212383 certs.go:382] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt.0b070d75 -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt
	I1124 14:02:29.388407  212383 certs.go:386] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key.0b070d75 -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key
	I1124 14:02:29.388467  212383 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.key
	I1124 14:02:29.388494  212383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.crt with IP's: []
	I1124 14:02:29.607942  212383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.crt ...
	I1124 14:02:29.607978  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.crt: {Name:mkf0227a8560a7238360c53d12e60293f9779f1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.608133  212383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.key ...
	I1124 14:02:29.608148  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.key: {Name:mkdb69944b7ff660a91a53e6ae6208e817233479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.608326  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem (1338 bytes)
	W1124 14:02:29.608368  212383 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178_empty.pem, impossibly tiny 0 bytes
	I1124 14:02:29.608383  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:02:29.608412  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem (1082 bytes)
	I1124 14:02:29.608442  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:02:29.608468  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem (1679 bytes)
	I1124 14:02:29.608515  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:02:29.609076  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:02:29.626013  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 14:02:29.643798  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:02:29.661375  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:02:29.679743  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 14:02:29.696528  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:02:29.728013  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:02:29.773516  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:02:29.805187  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem --> /usr/share/ca-certificates/4178.pem (1338 bytes)
	I1124 14:02:29.826865  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /usr/share/ca-certificates/41782.pem (1708 bytes)
	I1124 14:02:29.847529  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:02:29.867886  212383 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:02:29.882919  212383 ssh_runner.go:195] Run: openssl version
	I1124 14:02:29.889477  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41782.pem && ln -fs /usr/share/ca-certificates/41782.pem /etc/ssl/certs/41782.pem"
	I1124 14:02:29.898302  212383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41782.pem
	I1124 14:02:29.904667  212383 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/41782.pem
	I1124 14:02:29.904736  212383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41782.pem
	I1124 14:02:29.948420  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:02:29.957558  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:02:29.966733  212383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:29.970899  212383 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:29.970989  212383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:30.019996  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:02:30.030890  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4178.pem && ln -fs /usr/share/ca-certificates/4178.pem /etc/ssl/certs/4178.pem"
	I1124 14:02:30.057890  212383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4178.pem
	I1124 14:02:30.080661  212383 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4178.pem
	I1124 14:02:30.080813  212383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4178.pem
	I1124 14:02:30.155115  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4178.pem /etc/ssl/certs/51391683.0"
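	The three test -L || ln -fs commands above install each PEM under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0) so the system trust store can resolve it. A minimal Go sketch of the same pattern, shelling out to openssl for the hash (paths are the ones from the log; error handling kept short, and the link targets here are simplified relative to the actual commands):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash symlinks certPath into /etc/ssl/certs under its subject hash,
    // mirroring the "openssl x509 -hash -noout" + "ln -fs" pair in the log above.
    func linkByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // already linked
        }
        return os.Symlink(certPath, link)
    }

    func main() {
        for _, p := range []string{
            "/usr/share/ca-certificates/41782.pem",
            "/usr/share/ca-certificates/minikubeCA.pem",
            "/usr/share/ca-certificates/4178.pem",
        } {
            if err := linkByHash(p); err != nil {
                fmt.Fprintln(os.Stderr, p, err)
            }
        }
    }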
	I1124 14:02:30.165475  212383 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:02:30.170978  212383 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:02:30.171035  212383 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-609438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-609438 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:02:30.171124  212383 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 14:02:30.171192  212383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:02:30.211462  212383 cri.go:89] found id: ""
	I1124 14:02:30.211552  212383 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:02:30.226907  212383 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:02:30.236649  212383 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:02:30.236720  212383 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:02:30.248370  212383 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:02:30.248462  212383 kubeadm.go:158] found existing configuration files:
	
	I1124 14:02:30.248548  212383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1124 14:02:30.262084  212383 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:02:30.262152  212383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:02:30.270330  212383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1124 14:02:30.279476  212383 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:02:30.279543  212383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:02:30.288703  212383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1124 14:02:30.297950  212383 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:02:30.298023  212383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:02:30.310718  212383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1124 14:02:30.320531  212383 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:02:30.320603  212383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 14:02:30.329639  212383 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:02:30.406424  212383 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 14:02:30.406661  212383 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:02:30.479025  212383 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
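The three [WARNING ...] lines above are kubeadm preflight notices, not fatal errors; the init run continues past them. As an illustrative check only, assuming the node container carries the profile name (as it does for the embed-certs run below), the kubelet-service notice could be inspected by hand:

  docker exec default-k8s-diff-port-609438 systemctl is-enabled kubelet   # enable state behind the warning
  docker exec default-k8s-diff-port-609438 systemctl is-active kubelet    # whether the kubelet is already running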
	I1124 14:02:31.562417  213570 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-593634:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.981062358s)
	I1124 14:02:31.562447  213570 kic.go:203] duration metric: took 4.981201018s to extract preloaded images to volume ...
	W1124 14:02:31.562585  213570 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 14:02:31.562696  213570 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 14:02:31.653956  213570 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-593634 --name embed-certs-593634 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-593634 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-593634 --network embed-certs-593634 --ip 192.168.76.2 --volume embed-certs-593634:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:02:32.104099  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Running}}
	I1124 14:02:32.133617  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:02:32.170125  213570 cli_runner.go:164] Run: docker exec embed-certs-593634 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:02:32.243591  213570 oci.go:144] the created container "embed-certs-593634" has a running status.
	I1124 14:02:32.243619  213570 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa...
	I1124 14:02:33.008353  213570 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:02:33.030437  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:02:33.051118  213570 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:02:33.051142  213570 kic_runner.go:114] Args: [docker exec --privileged embed-certs-593634 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 14:02:33.146272  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:02:33.172981  213570 machine.go:94] provisionDockerMachine start ...
	I1124 14:02:33.173175  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:33.203273  213570 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:33.203611  213570 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 14:02:33.203620  213570 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:02:33.204370  213570 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 14:02:36.376430  213570 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-593634
	
	I1124 14:02:36.376458  213570 ubuntu.go:182] provisioning hostname "embed-certs-593634"
	I1124 14:02:36.376538  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:36.401139  213570 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:36.401453  213570 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 14:02:36.401469  213570 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-593634 && echo "embed-certs-593634" | sudo tee /etc/hostname
	I1124 14:02:36.589650  213570 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-593634
	
	I1124 14:02:36.589799  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:36.618006  213570 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:36.618310  213570 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 14:02:36.618326  213570 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-593634' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-593634/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-593634' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:02:36.779947  213570 main.go:143] libmachine: SSH cmd err, output: <nil>: 
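The inline script above only touches /etc/hosts when the hostname mapping is missing. A minimal, illustrative confirmation (same container name as in the docker run above):

  docker exec embed-certs-593634 grep embed-certs-593634 /etc/hosts   # the new hostname should appear, typically on a 127.0.1.1 line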
	I1124 14:02:36.780024  213570 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2368/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2368/.minikube}
	I1124 14:02:36.780065  213570 ubuntu.go:190] setting up certificates
	I1124 14:02:36.780107  213570 provision.go:84] configureAuth start
	I1124 14:02:36.780202  213570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-593634
	I1124 14:02:36.805555  213570 provision.go:143] copyHostCerts
	I1124 14:02:36.805621  213570 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem, removing ...
	I1124 14:02:36.805629  213570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem
	I1124 14:02:36.805706  213570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem (1082 bytes)
	I1124 14:02:36.805804  213570 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem, removing ...
	I1124 14:02:36.805809  213570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem
	I1124 14:02:36.805834  213570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem (1123 bytes)
	I1124 14:02:36.805881  213570 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem, removing ...
	I1124 14:02:36.805885  213570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem
	I1124 14:02:36.805907  213570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem (1679 bytes)
	I1124 14:02:36.805955  213570 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem org=jenkins.embed-certs-593634 san=[127.0.0.1 192.168.76.2 embed-certs-593634 localhost minikube]
	I1124 14:02:37.074442  213570 provision.go:177] copyRemoteCerts
	I1124 14:02:37.074519  213570 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:02:37.074565  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.105113  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.228963  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 14:02:37.249359  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 14:02:37.269580  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 14:02:37.289369  213570 provision.go:87] duration metric: took 509.223197ms to configureAuth
	I1124 14:02:37.289401  213570 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:02:37.289587  213570 config.go:182] Loaded profile config "embed-certs-593634": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:02:37.289602  213570 machine.go:97] duration metric: took 4.11660352s to provisionDockerMachine
	I1124 14:02:37.289609  213570 client.go:176] duration metric: took 11.57476669s to LocalClient.Create
	I1124 14:02:37.289629  213570 start.go:167] duration metric: took 11.574903397s to libmachine.API.Create "embed-certs-593634"
	I1124 14:02:37.289636  213570 start.go:293] postStartSetup for "embed-certs-593634" (driver="docker")
	I1124 14:02:37.289644  213570 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:02:37.289700  213570 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:02:37.289746  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.313497  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.421261  213570 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:02:37.425376  213570 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:02:37.425402  213570 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:02:37.425413  213570 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/addons for local assets ...
	I1124 14:02:37.425467  213570 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/files for local assets ...
	I1124 14:02:37.425546  213570 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem -> 41782.pem in /etc/ssl/certs
	I1124 14:02:37.425648  213570 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:02:37.434170  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:02:37.454297  213570 start.go:296] duration metric: took 164.646825ms for postStartSetup
	I1124 14:02:37.454768  213570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-593634
	I1124 14:02:37.473090  213570 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/config.json ...
	I1124 14:02:37.473375  213570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:02:37.473419  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.492467  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.597996  213570 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:02:37.603374  213570 start.go:128] duration metric: took 11.892207017s to createHost
	I1124 14:02:37.603402  213570 start.go:83] releasing machines lock for "embed-certs-593634", held for 11.892340336s
	I1124 14:02:37.603491  213570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-593634
	I1124 14:02:37.622681  213570 ssh_runner.go:195] Run: cat /version.json
	I1124 14:02:37.622739  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.622988  213570 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:02:37.623049  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.653121  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.661266  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.867529  213570 ssh_runner.go:195] Run: systemctl --version
	I1124 14:02:37.880289  213570 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:02:37.885513  213570 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:02:37.885586  213570 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:02:37.919967  213570 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
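The find/mv above side-lines pre-existing bridge and podman CNI configs (renamed with a .mk_disabled suffix) so that only the CNI minikube selects later, kindnet in this run, stays active. Illustrative listing of the result:

  docker exec embed-certs-593634 ls /etc/cni/net.d   # the disabled configs carry the .mk_disabled suffix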
	I1124 14:02:37.920041  213570 start.go:496] detecting cgroup driver to use...
	I1124 14:02:37.920090  213570 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:02:37.920196  213570 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 14:02:37.939855  213570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 14:02:37.954765  213570 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:02:37.954832  213570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:02:37.973211  213570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:02:37.993531  213570 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:02:38.152217  213570 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:02:38.315244  213570 docker.go:234] disabling docker service ...
	I1124 14:02:38.315315  213570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:02:38.342606  213570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:02:38.357435  213570 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:02:38.501143  213570 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:02:38.653968  213570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:02:38.670062  213570 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:02:38.691612  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 14:02:38.701736  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 14:02:38.711955  213570 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 14:02:38.712108  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 14:02:38.722429  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:02:38.732416  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 14:02:38.742370  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:02:38.752386  213570 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:02:38.761548  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 14:02:38.771322  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 14:02:38.781079  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 14:02:38.790804  213570 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:02:38.799605  213570 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:02:38.808384  213570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:38.957014  213570 ssh_runner.go:195] Run: sudo systemctl restart containerd
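The sed edits above switch containerd to the cgroupfs driver (SystemdCgroup = false), pin the sandbox image to pause:3.10.1 and point the CRI conf_dir at /etc/cni/net.d before the restart. An illustrative spot-check of the resulting file, assuming the same container name:

  docker exec embed-certs-593634 grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml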
	I1124 14:02:39.134468  213570 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 14:02:39.134589  213570 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 14:02:39.138612  213570 start.go:564] Will wait 60s for crictl version
	I1124 14:02:39.138728  213570 ssh_runner.go:195] Run: which crictl
	I1124 14:02:39.142835  213570 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:02:39.183049  213570 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 14:02:39.183127  213570 ssh_runner.go:195] Run: containerd --version
	I1124 14:02:39.209644  213570 ssh_runner.go:195] Run: containerd --version
	I1124 14:02:39.242563  213570 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 14:02:39.245632  213570 cli_runner.go:164] Run: docker network inspect embed-certs-593634 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:02:39.261116  213570 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 14:02:39.265349  213570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:02:39.275060  213570 kubeadm.go:884] updating cluster {Name:embed-certs-593634 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:02:39.275179  213570 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:02:39.275240  213570 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:02:39.309584  213570 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:02:39.309604  213570 containerd.go:534] Images already preloaded, skipping extraction
	I1124 14:02:39.309666  213570 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:02:39.338298  213570 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:02:39.338369  213570 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:02:39.338391  213570 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1124 14:02:39.338540  213570 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-593634 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:02:39.338638  213570 ssh_runner.go:195] Run: sudo crictl info
	I1124 14:02:39.374509  213570 cni.go:84] Creating CNI manager for ""
	I1124 14:02:39.374529  213570 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:02:39.374546  213570 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:02:39.374567  213570 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-593634 NodeName:embed-certs-593634 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:02:39.374695  213570 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-593634"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:02:39.374758  213570 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:02:39.383722  213570 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:02:39.383790  213570 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:02:39.392664  213570 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1124 14:02:39.407366  213570 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:02:39.421539  213570 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
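The 2231-byte file written above is the kubeadm config rendered earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration). As a hedged sketch, assuming kubeadm v1.34's config validate subcommand is available at the path shown in the log, it could be checked before init with:

  docker exec embed-certs-593634 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new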
	I1124 14:02:39.435750  213570 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:02:39.439949  213570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:02:39.450067  213570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:39.594389  213570 ssh_runner.go:195] Run: sudo systemctl start kubelet
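With the kubelet unit and its 10-kubeadm.conf drop-in copied above, daemon-reload plus start brings the kubelet up under the flags shown earlier. Illustrative inspection commands, under the same container-name assumption:

  docker exec embed-certs-593634 systemctl cat kubelet               # unit plus the 10-kubeadm.conf drop-in
  docker exec embed-certs-593634 systemctl status kubelet --no-pager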
	I1124 14:02:39.612637  213570 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634 for IP: 192.168.76.2
	I1124 14:02:39.612654  213570 certs.go:195] generating shared ca certs ...
	I1124 14:02:39.612670  213570 certs.go:227] acquiring lock for ca certs: {Name:mkcd8707c782acde0e57168c044a3df942dc4ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.612812  213570 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key
	I1124 14:02:39.612861  213570 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key
	I1124 14:02:39.612868  213570 certs.go:257] generating profile certs ...
	I1124 14:02:39.612921  213570 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.key
	I1124 14:02:39.612933  213570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.crt with IP's: []
	I1124 14:02:39.743608  213570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.crt ...
	I1124 14:02:39.743688  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.crt: {Name:mkdc127047d7bba99c4ff0de010fa76eaa96351a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.743978  213570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.key ...
	I1124 14:02:39.744016  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.key: {Name:mk5b65ad154f9ff1864bd2678d53c0d49d42b626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.744181  213570 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key.20c14e55
	I1124 14:02:39.744223  213570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt.20c14e55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 14:02:39.792416  213570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt.20c14e55 ...
	I1124 14:02:39.792488  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt.20c14e55: {Name:mk898939d3f887dee7ec2cb55d4f9f3c1473f371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.792715  213570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key.20c14e55 ...
	I1124 14:02:39.792751  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key.20c14e55: {Name:mk7634950b7d8fc2f57ae8ad6d2b71e2a24db521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.792893  213570 certs.go:382] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt.20c14e55 -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt
	I1124 14:02:39.793035  213570 certs.go:386] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key.20c14e55 -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key
	I1124 14:02:39.793197  213570 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.key
	I1124 14:02:39.793218  213570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.crt with IP's: []
	I1124 14:02:40.512550  213570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.crt ...
	I1124 14:02:40.512590  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.crt: {Name:mk7e59e3c705bb60e30918ea8dec355fb87a4cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:40.512783  213570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.key ...
	I1124 14:02:40.512800  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.key: {Name:mk1c28b0bf985e63e205a9d607bdda54b666c8d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:40.512994  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem (1338 bytes)
	W1124 14:02:40.513046  213570 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178_empty.pem, impossibly tiny 0 bytes
	I1124 14:02:40.513055  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:02:40.513084  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem (1082 bytes)
	I1124 14:02:40.513116  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:02:40.513155  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem (1679 bytes)
	I1124 14:02:40.513205  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:02:40.513807  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:02:40.534476  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 14:02:40.554772  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:02:40.573041  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:02:40.592563  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 14:02:40.610272  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:02:40.648106  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:02:40.675421  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:02:40.712861  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:02:40.741274  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem --> /usr/share/ca-certificates/4178.pem (1338 bytes)
	I1124 14:02:40.775540  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /usr/share/ca-certificates/41782.pem (1708 bytes)
	I1124 14:02:40.810151  213570 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
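The apiserver serving cert generated above was signed for IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2 and has just been copied to /var/lib/minikube/certs/apiserver.crt on the node. A minimal, illustrative way to confirm those SANs made it into the cert (assuming openssl is present in the node image):

  docker exec embed-certs-593634 sudo sh -c "openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'"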
	I1124 14:02:40.834734  213570 ssh_runner.go:195] Run: openssl version
	I1124 14:02:40.841134  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4178.pem && ln -fs /usr/share/ca-certificates/4178.pem /etc/ssl/certs/4178.pem"
	I1124 14:02:40.853029  213570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4178.pem
	I1124 14:02:40.860558  213570 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4178.pem
	I1124 14:02:40.860626  213570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4178.pem
	I1124 14:02:40.918401  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4178.pem /etc/ssl/certs/51391683.0"
	I1124 14:02:40.928700  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41782.pem && ln -fs /usr/share/ca-certificates/41782.pem /etc/ssl/certs/41782.pem"
	I1124 14:02:40.943881  213570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41782.pem
	I1124 14:02:40.948767  213570 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/41782.pem
	I1124 14:02:40.948833  213570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41782.pem
	I1124 14:02:41.014703  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:02:41.026160  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:02:41.039512  213570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:41.046666  213570 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:41.046734  213570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:41.111180  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
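The three ln -fs targets above (51391683.0, 3ec20f2e.0 and b5213941.0) are the OpenSSL subject-hash names for 4178.pem, 41782.pem and minikubeCA.pem respectively; that is what the openssl x509 -hash runs in between compute. Illustrative reproduction of one of them:

  docker exec embed-certs-593634 openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, matching /etc/ssl/certs/b5213941.0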
	I1124 14:02:41.121762  213570 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:02:41.128022  213570 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:02:41.128075  213570 kubeadm.go:401] StartCluster: {Name:embed-certs-593634 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:02:41.128164  213570 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 14:02:41.128228  213570 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:02:41.181954  213570 cri.go:89] found id: ""
	I1124 14:02:41.182043  213570 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:02:41.192535  213570 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:02:41.201483  213570 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:02:41.201548  213570 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:02:41.210919  213570 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:02:41.210940  213570 kubeadm.go:158] found existing configuration files:
	
	I1124 14:02:41.210999  213570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 14:02:41.223268  213570 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:02:41.223332  213570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:02:41.239377  213570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 14:02:41.251095  213570 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:02:41.251165  213570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:02:41.259252  213570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 14:02:41.268559  213570 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:02:41.268620  213570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:02:41.282438  213570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 14:02:41.293894  213570 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:02:41.293975  213570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 14:02:41.321578  213570 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:02:41.440101  213570 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:02:41.445250  213570 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:02:41.492866  213570 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:02:41.499280  213570 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 14:02:41.499334  213570 kubeadm.go:319] OS: Linux
	I1124 14:02:41.499382  213570 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:02:41.499444  213570 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 14:02:41.499504  213570 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:02:41.499557  213570 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:02:41.499612  213570 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:02:41.499666  213570 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:02:41.499716  213570 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:02:41.499769  213570 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:02:41.499820  213570 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 14:02:41.625341  213570 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:02:41.625456  213570 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:02:41.625558  213570 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:02:41.636268  213570 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 14:02:41.641768  213570 out.go:252]   - Generating certificates and keys ...
	I1124 14:02:41.641865  213570 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:02:41.641939  213570 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:02:42.619223  213570 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:02:43.011953  213570 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 14:02:43.483393  213570 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:02:43.810126  213570 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:02:44.825951  213570 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:02:44.828294  213570 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-593634 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 14:02:45.647118  213570 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:02:45.647643  213570 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-593634 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 14:02:45.905141  213570 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:02:46.000202  213570 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:02:46.120215  213570 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:02:46.120734  213570 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:02:46.900838  213570 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:02:47.805102  213570 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 14:02:48.517833  213570 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:02:49.348256  213570 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:02:49.516941  213570 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:02:49.518037  213570 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:02:49.520983  213570 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:02:49.523689  213570 out.go:252]   - Booting up control plane ...
	I1124 14:02:49.523845  213570 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:02:49.523973  213570 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:02:49.525837  213570 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:02:49.554261  213570 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:02:49.554370  213570 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:02:49.565946  213570 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:02:49.567436  213570 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:02:49.571311  213570 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:02:49.806053  213570 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:02:49.806172  213570 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
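The wait-control-plane phase above polls the kubelet healthz endpoint for up to 4m0s. A hedged, illustrative equivalent of that probe (assuming curl is available inside the node container):

  docker exec embed-certs-593634 curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy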
	I1124 14:02:52.457159  212383 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:02:52.457215  212383 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:02:52.457303  212383 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:02:52.457359  212383 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 14:02:52.457393  212383 kubeadm.go:319] OS: Linux
	I1124 14:02:52.457438  212383 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:02:52.457486  212383 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 14:02:52.457532  212383 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:02:52.457580  212383 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:02:52.457628  212383 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:02:52.457682  212383 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:02:52.457728  212383 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:02:52.457775  212383 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:02:52.457821  212383 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 14:02:52.457893  212383 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:02:52.457987  212383 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:02:52.458077  212383 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:02:52.458138  212383 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 14:02:52.461386  212383 out.go:252]   - Generating certificates and keys ...
	I1124 14:02:52.461491  212383 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:02:52.461556  212383 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:02:52.461623  212383 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:02:52.461680  212383 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 14:02:52.461741  212383 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:02:52.461791  212383 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:02:52.461845  212383 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:02:52.461977  212383 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-609438 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:02:52.462028  212383 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:02:52.462157  212383 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-609438 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:02:52.462223  212383 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:02:52.462287  212383 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:02:52.462339  212383 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:02:52.462402  212383 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:02:52.462458  212383 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:02:52.462521  212383 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 14:02:52.462611  212383 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:02:52.462674  212383 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:02:52.462729  212383 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:02:52.462820  212383 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:02:52.462893  212383 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:02:52.465845  212383 out.go:252]   - Booting up control plane ...
	I1124 14:02:52.466035  212383 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:02:52.466163  212383 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:02:52.466242  212383 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:02:52.466364  212383 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:02:52.466465  212383 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:02:52.466577  212383 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:02:52.466668  212383 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:02:52.466709  212383 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:02:52.466848  212383 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:02:52.466960  212383 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 14:02:52.467024  212383 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.018392479s
	I1124 14:02:52.467123  212383 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:02:52.467209  212383 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1124 14:02:52.467305  212383 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:02:52.467389  212383 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 14:02:52.467470  212383 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.741501846s
	I1124 14:02:52.467552  212383 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.503243598s
	I1124 14:02:52.467627  212383 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.824874472s
	I1124 14:02:52.467741  212383 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:02:52.467875  212383 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:02:52.467955  212383 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:02:52.468176  212383 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-609438 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:02:52.468237  212383 kubeadm.go:319] [bootstrap-token] Using token: vzq4ay.serxkml6gk1378wv
	I1124 14:02:52.471358  212383 out.go:252]   - Configuring RBAC rules ...
	I1124 14:02:52.471499  212383 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:02:52.471591  212383 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:02:52.471743  212383 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:02:52.471880  212383 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:02:52.472017  212383 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:02:52.472112  212383 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:02:52.472236  212383 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:02:52.472282  212383 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:02:52.472331  212383 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:02:52.472335  212383 kubeadm.go:319] 
	I1124 14:02:52.472400  212383 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:02:52.472411  212383 kubeadm.go:319] 
	I1124 14:02:52.472495  212383 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:02:52.472499  212383 kubeadm.go:319] 
	I1124 14:02:52.472526  212383 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:02:52.472589  212383 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:02:52.472643  212383 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:02:52.472647  212383 kubeadm.go:319] 
	I1124 14:02:52.472705  212383 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:02:52.472709  212383 kubeadm.go:319] 
	I1124 14:02:52.472759  212383 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:02:52.472763  212383 kubeadm.go:319] 
	I1124 14:02:52.472819  212383 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:02:52.472899  212383 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:02:52.472973  212383 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:02:52.472976  212383 kubeadm.go:319] 
	I1124 14:02:52.473067  212383 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:02:52.473150  212383 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:02:52.473154  212383 kubeadm.go:319] 
	I1124 14:02:52.473251  212383 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token vzq4ay.serxkml6gk1378wv \
	I1124 14:02:52.473364  212383 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac \
	I1124 14:02:52.473385  212383 kubeadm.go:319] 	--control-plane 
	I1124 14:02:52.473389  212383 kubeadm.go:319] 
	I1124 14:02:52.473481  212383 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:02:52.473484  212383 kubeadm.go:319] 
	I1124 14:02:52.473573  212383 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token vzq4ay.serxkml6gk1378wv \
	I1124 14:02:52.473696  212383 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac 
	I1124 14:02:52.473705  212383 cni.go:84] Creating CNI manager for ""
	I1124 14:02:52.473711  212383 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:02:52.476852  212383 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:02:52.479922  212383 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:02:52.489605  212383 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 14:02:52.489623  212383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:02:52.536790  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 14:02:53.413438  212383 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:02:53.413571  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:53.413654  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-609438 minikube.k8s.io/updated_at=2025_11_24T14_02_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=default-k8s-diff-port-609438 minikube.k8s.io/primary=true
	I1124 14:02:53.507283  212383 ops.go:34] apiserver oom_adj: -16
	I1124 14:02:53.863033  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:50.808351  213570 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002003298s
	I1124 14:02:50.815187  213570 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:02:50.815743  213570 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 14:02:50.816608  213570 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:02:50.818559  213570 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 14:02:54.363074  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:54.863777  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:55.363086  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:55.863114  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:56.363110  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:56.863441  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:57.057097  212383 kubeadm.go:1114] duration metric: took 3.643574546s to wait for elevateKubeSystemPrivileges
	I1124 14:02:57.057124  212383 kubeadm.go:403] duration metric: took 26.886093324s to StartCluster
	I1124 14:02:57.057141  212383 settings.go:142] acquiring lock: {Name:mk2b0bbff4d8ced468f457362668d43b813dc062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:57.057204  212383 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:02:57.057903  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/kubeconfig: {Name:mk246d21eaffbd8aca2abdc1b2f89d6fcc902f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:57.058100  212383 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 14:02:57.058223  212383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:02:57.058472  212383 config.go:182] Loaded profile config "default-k8s-diff-port-609438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:02:57.058507  212383 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:02:57.058563  212383 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-609438"
	I1124 14:02:57.058577  212383 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-609438"
	I1124 14:02:57.058598  212383 host.go:66] Checking if "default-k8s-diff-port-609438" exists ...
	I1124 14:02:57.059105  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:57.059672  212383 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-609438"
	I1124 14:02:57.059698  212383 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-609438"
	I1124 14:02:57.060034  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:57.062295  212383 out.go:179] * Verifying Kubernetes components...
	I1124 14:02:57.067608  212383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:57.096470  212383 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:02:57.100431  212383 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:02:57.100453  212383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:02:57.100520  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:57.108007  212383 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-609438"
	I1124 14:02:57.108047  212383 host.go:66] Checking if "default-k8s-diff-port-609438" exists ...
	I1124 14:02:57.108469  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:57.150290  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:57.151191  212383 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:02:57.151207  212383 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:02:57.151270  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:57.180229  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:57.835181  212383 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:02:57.835375  212383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 14:02:57.843296  212383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:02:58.048720  212383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:02:55.577519  213570 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.75919955s
	I1124 14:02:57.488695  213570 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.669631688s
	I1124 14:02:59.319576  213570 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.503330978s
	I1124 14:02:59.347736  213570 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:02:59.365960  213570 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:02:59.389045  213570 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:02:59.389257  213570 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-593634 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:02:59.404075  213570 kubeadm.go:319] [bootstrap-token] Using token: sdluey.txxijid8fmo5jyau
	I1124 14:02:59.018640  212383 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.183422592s)
	I1124 14:02:59.019392  212383 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-609438" to be "Ready" ...
	I1124 14:02:59.019719  212383 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.176349884s)
	I1124 14:02:59.020165  212383 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.184766141s)
	I1124 14:02:59.020204  212383 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 14:02:59.505284  212383 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.456466205s)
	I1124 14:02:59.508376  212383 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1124 14:02:59.407186  213570 out.go:252]   - Configuring RBAC rules ...
	I1124 14:02:59.407326  213570 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:02:59.413876  213570 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:02:59.424114  213570 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:02:59.429247  213570 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:02:59.435888  213570 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:02:59.441214  213570 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:02:59.729166  213570 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:03:00.281783  213570 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:03:00.726578  213570 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:03:00.731583  213570 kubeadm.go:319] 
	I1124 14:03:00.731683  213570 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:03:00.731705  213570 kubeadm.go:319] 
	I1124 14:03:00.731783  213570 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:03:00.731791  213570 kubeadm.go:319] 
	I1124 14:03:00.731817  213570 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:03:00.731879  213570 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:03:00.731955  213570 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:03:00.731964  213570 kubeadm.go:319] 
	I1124 14:03:00.732019  213570 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:03:00.732029  213570 kubeadm.go:319] 
	I1124 14:03:00.732077  213570 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:03:00.732085  213570 kubeadm.go:319] 
	I1124 14:03:00.732143  213570 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:03:00.732222  213570 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:03:00.732296  213570 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:03:00.732305  213570 kubeadm.go:319] 
	I1124 14:03:00.732391  213570 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:03:00.732470  213570 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:03:00.732477  213570 kubeadm.go:319] 
	I1124 14:03:00.732562  213570 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token sdluey.txxijid8fmo5jyau \
	I1124 14:03:00.732674  213570 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac \
	I1124 14:03:00.732700  213570 kubeadm.go:319] 	--control-plane 
	I1124 14:03:00.732708  213570 kubeadm.go:319] 
	I1124 14:03:00.732793  213570 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:03:00.732801  213570 kubeadm.go:319] 
	I1124 14:03:00.732883  213570 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token sdluey.txxijid8fmo5jyau \
	I1124 14:03:00.732989  213570 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac 
	I1124 14:03:00.734466  213570 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 14:03:00.734704  213570 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:03:00.734818  213570 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 14:03:00.734840  213570 cni.go:84] Creating CNI manager for ""
	I1124 14:03:00.734847  213570 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:03:00.738356  213570 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:02:59.511261  212383 addons.go:530] duration metric: took 2.452743621s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 14:02:59.527883  212383 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-609438" context rescaled to 1 replicas
	W1124 14:03:01.022799  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:03.522484  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	I1124 14:03:00.741285  213570 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:03:00.747200  213570 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 14:03:00.747222  213570 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:03:00.762942  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 14:03:01.083756  213570 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:03:01.083943  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:01.084029  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-593634 minikube.k8s.io/updated_at=2025_11_24T14_03_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=embed-certs-593634 minikube.k8s.io/primary=true
	I1124 14:03:01.235259  213570 ops.go:34] apiserver oom_adj: -16
	I1124 14:03:01.235388  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:01.736213  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:02.235575  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:02.735531  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:03.235547  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:03.735985  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:04.235605  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:04.735509  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:05.235491  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:05.735597  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:05.862499  213570 kubeadm.go:1114] duration metric: took 4.778639859s to wait for elevateKubeSystemPrivileges
	I1124 14:03:05.862539  213570 kubeadm.go:403] duration metric: took 24.734468729s to StartCluster
	I1124 14:03:05.862559  213570 settings.go:142] acquiring lock: {Name:mk2b0bbff4d8ced468f457362668d43b813dc062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:03:05.862641  213570 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:03:05.864034  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/kubeconfig: {Name:mk246d21eaffbd8aca2abdc1b2f89d6fcc902f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:03:05.864291  213570 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:03:05.864292  213570 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 14:03:05.864627  213570 config.go:182] Loaded profile config "embed-certs-593634": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:03:05.864675  213570 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:03:05.864760  213570 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-593634"
	I1124 14:03:05.864775  213570 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-593634"
	I1124 14:03:05.864814  213570 host.go:66] Checking if "embed-certs-593634" exists ...
	I1124 14:03:05.865448  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:03:05.865928  213570 addons.go:70] Setting default-storageclass=true in profile "embed-certs-593634"
	I1124 14:03:05.865962  213570 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-593634"
	I1124 14:03:05.866329  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:03:05.867882  213570 out.go:179] * Verifying Kubernetes components...
	I1124 14:03:05.871678  213570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:03:05.918376  213570 addons.go:239] Setting addon default-storageclass=true in "embed-certs-593634"
	I1124 14:03:05.918427  213570 host.go:66] Checking if "embed-certs-593634" exists ...
	I1124 14:03:05.919006  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:03:05.928779  213570 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:03:05.931678  213570 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:03:05.931712  213570 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:03:05.931788  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:03:05.962335  213570 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:03:05.962376  213570 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:03:05.962476  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:03:05.993403  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:03:06.003508  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:03:06.391385  213570 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 14:03:06.391488  213570 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:03:06.435021  213570 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:03:06.439159  213570 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:03:06.771396  213570 node_ready.go:35] waiting up to 6m0s for node "embed-certs-593634" to be "Ready" ...
	I1124 14:03:06.771837  213570 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 14:03:07.089005  213570 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1124 14:03:06.022254  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:08.023381  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	I1124 14:03:07.091942  213570 addons.go:530] duration metric: took 1.22725676s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 14:03:07.275615  213570 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-593634" context rescaled to 1 replicas
	W1124 14:03:08.774304  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:10.522868  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:12.525848  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:10.776272  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:13.274310  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:15.274775  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:14.526016  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:17.023060  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:17.774691  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:20.274332  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:19.523467  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:21.524121  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:23.524697  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:22.774276  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:24.775051  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:26.022538  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:28.023018  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:27.274791  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:29.275073  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:30.030420  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:32.524753  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:31.774872  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:34.274493  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:35.023155  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:37.025173  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:36.275275  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:38.774804  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	I1124 14:03:39.023101  212383 node_ready.go:49] node "default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:39.023134  212383 node_ready.go:38] duration metric: took 40.003724122s for node "default-k8s-diff-port-609438" to be "Ready" ...
	I1124 14:03:39.023149  212383 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:03:39.023211  212383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:03:39.035892  212383 api_server.go:72] duration metric: took 41.977763431s to wait for apiserver process to appear ...
	I1124 14:03:39.035957  212383 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:03:39.035992  212383 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 14:03:39.045601  212383 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 14:03:39.046766  212383 api_server.go:141] control plane version: v1.34.1
	I1124 14:03:39.046790  212383 api_server.go:131] duration metric: took 10.8162ms to wait for apiserver health ...
	I1124 14:03:39.046799  212383 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:03:39.057366  212383 system_pods.go:59] 8 kube-system pods found
	I1124 14:03:39.057464  212383 system_pods.go:61] "coredns-66bc5c9577-qctbs" [cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:39.057486  212383 system_pods.go:61] "etcd-default-k8s-diff-port-609438" [3e2d5715-12d7-441e-9747-edb4c6f78577] Running
	I1124 14:03:39.057527  212383 system_pods.go:61] "kindnet-jcqb9" [92836c58-7b28-4b1b-838d-9491cd23823b] Running
	I1124 14:03:39.057552  212383 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-609438" [b6e69d70-9c7f-4b06-8ba8-a37c17d79bb9] Running
	I1124 14:03:39.057573  212383 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-609438" [a1dba2ac-ba3c-4282-966e-c7abffbb6b9a] Running
	I1124 14:03:39.057612  212383 system_pods.go:61] "kube-proxy-frlpg" [814cc9f1-7449-431c-a35d-3ac3b4d05db9] Running
	I1124 14:03:39.057637  212383 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-609438" [a87b6471-6253-4b9c-abd1-83d029df6343] Running
	I1124 14:03:39.057664  212383 system_pods.go:61] "storage-provisioner" [98d7eb97-3a94-4904-9af3-f063689cec40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:39.057702  212383 system_pods.go:74] duration metric: took 10.895381ms to wait for pod list to return data ...
	I1124 14:03:39.057729  212383 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:03:39.068310  212383 default_sa.go:45] found service account: "default"
	I1124 14:03:39.068335  212383 default_sa.go:55] duration metric: took 10.585051ms for default service account to be created ...
	I1124 14:03:39.068346  212383 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:03:39.072487  212383 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:39.072578  212383 system_pods.go:89] "coredns-66bc5c9577-qctbs" [cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:39.072601  212383 system_pods.go:89] "etcd-default-k8s-diff-port-609438" [3e2d5715-12d7-441e-9747-edb4c6f78577] Running
	I1124 14:03:39.072648  212383 system_pods.go:89] "kindnet-jcqb9" [92836c58-7b28-4b1b-838d-9491cd23823b] Running
	I1124 14:03:39.072673  212383 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-609438" [b6e69d70-9c7f-4b06-8ba8-a37c17d79bb9] Running
	I1124 14:03:39.072696  212383 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-609438" [a1dba2ac-ba3c-4282-966e-c7abffbb6b9a] Running
	I1124 14:03:39.072735  212383 system_pods.go:89] "kube-proxy-frlpg" [814cc9f1-7449-431c-a35d-3ac3b4d05db9] Running
	I1124 14:03:39.072761  212383 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-609438" [a87b6471-6253-4b9c-abd1-83d029df6343] Running
	I1124 14:03:39.072785  212383 system_pods.go:89] "storage-provisioner" [98d7eb97-3a94-4904-9af3-f063689cec40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:39.072847  212383 retry.go:31] will retry after 264.799989ms: missing components: kube-dns
	I1124 14:03:39.342534  212383 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:39.342686  212383 system_pods.go:89] "coredns-66bc5c9577-qctbs" [cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:39.342725  212383 system_pods.go:89] "etcd-default-k8s-diff-port-609438" [3e2d5715-12d7-441e-9747-edb4c6f78577] Running
	I1124 14:03:39.342754  212383 system_pods.go:89] "kindnet-jcqb9" [92836c58-7b28-4b1b-838d-9491cd23823b] Running
	I1124 14:03:39.342775  212383 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-609438" [b6e69d70-9c7f-4b06-8ba8-a37c17d79bb9] Running
	I1124 14:03:39.342816  212383 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-609438" [a1dba2ac-ba3c-4282-966e-c7abffbb6b9a] Running
	I1124 14:03:39.342842  212383 system_pods.go:89] "kube-proxy-frlpg" [814cc9f1-7449-431c-a35d-3ac3b4d05db9] Running
	I1124 14:03:39.342864  212383 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-609438" [a87b6471-6253-4b9c-abd1-83d029df6343] Running
	I1124 14:03:39.342912  212383 system_pods.go:89] "storage-provisioner" [98d7eb97-3a94-4904-9af3-f063689cec40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:39.342941  212383 retry.go:31] will retry after 272.670872ms: missing components: kube-dns
	I1124 14:03:39.626215  212383 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:39.626242  212383 system_pods.go:89] "coredns-66bc5c9577-qctbs" [cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de] Running
	I1124 14:03:39.626248  212383 system_pods.go:89] "etcd-default-k8s-diff-port-609438" [3e2d5715-12d7-441e-9747-edb4c6f78577] Running
	I1124 14:03:39.626254  212383 system_pods.go:89] "kindnet-jcqb9" [92836c58-7b28-4b1b-838d-9491cd23823b] Running
	I1124 14:03:39.626258  212383 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-609438" [b6e69d70-9c7f-4b06-8ba8-a37c17d79bb9] Running
	I1124 14:03:39.626271  212383 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-609438" [a1dba2ac-ba3c-4282-966e-c7abffbb6b9a] Running
	I1124 14:03:39.626274  212383 system_pods.go:89] "kube-proxy-frlpg" [814cc9f1-7449-431c-a35d-3ac3b4d05db9] Running
	I1124 14:03:39.626278  212383 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-609438" [a87b6471-6253-4b9c-abd1-83d029df6343] Running
	I1124 14:03:39.626282  212383 system_pods.go:89] "storage-provisioner" [98d7eb97-3a94-4904-9af3-f063689cec40] Running
	I1124 14:03:39.626289  212383 system_pods.go:126] duration metric: took 557.937565ms to wait for k8s-apps to be running ...
	I1124 14:03:39.626297  212383 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:03:39.626351  212383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:03:39.649756  212383 system_svc.go:56] duration metric: took 23.432209ms WaitForService to wait for kubelet
	I1124 14:03:39.649833  212383 kubeadm.go:587] duration metric: took 42.591709093s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:03:39.649867  212383 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:03:39.658388  212383 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:03:39.658418  212383 node_conditions.go:123] node cpu capacity is 2
	I1124 14:03:39.658433  212383 node_conditions.go:105] duration metric: took 8.545281ms to run NodePressure ...
	I1124 14:03:39.658445  212383 start.go:242] waiting for startup goroutines ...
	I1124 14:03:39.658453  212383 start.go:247] waiting for cluster config update ...
	I1124 14:03:39.658464  212383 start.go:256] writing updated cluster config ...
	I1124 14:03:39.658759  212383 ssh_runner.go:195] Run: rm -f paused
	I1124 14:03:39.662925  212383 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:03:39.668038  212383 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qctbs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.673734  212383 pod_ready.go:94] pod "coredns-66bc5c9577-qctbs" is "Ready"
	I1124 14:03:39.673815  212383 pod_ready.go:86] duration metric: took 5.694049ms for pod "coredns-66bc5c9577-qctbs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.676472  212383 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.685362  212383 pod_ready.go:94] pod "etcd-default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:39.685439  212383 pod_ready.go:86] duration metric: took 8.894816ms for pod "etcd-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.688312  212383 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.695577  212383 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:39.695663  212383 pod_ready.go:86] duration metric: took 7.234136ms for pod "kube-apiserver-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.698560  212383 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:40.070303  212383 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:40.070379  212383 pod_ready.go:86] duration metric: took 371.738474ms for pod "kube-controller-manager-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:40.267521  212383 pod_ready.go:83] waiting for pod "kube-proxy-frlpg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:40.667723  212383 pod_ready.go:94] pod "kube-proxy-frlpg" is "Ready"
	I1124 14:03:40.667753  212383 pod_ready.go:86] duration metric: took 400.161589ms for pod "kube-proxy-frlpg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:40.868901  212383 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:41.268703  212383 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:41.268732  212383 pod_ready.go:86] duration metric: took 399.797357ms for pod "kube-scheduler-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:41.268746  212383 pod_ready.go:40] duration metric: took 1.605732693s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:03:41.331086  212383 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:03:41.336425  212383 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-609438" cluster and "default" namespace by default
	W1124 14:03:41.279143  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:43.774833  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	9c4f0887e02e4       1611cd07b61d5       6 seconds ago        Running             busybox                   0                   5b44381c74ffd       busybox                                                default
	00dfaea3cc3d9       ba04bb24b9575       11 seconds ago       Running             storage-provisioner       0                   abcbb29d89b8e       storage-provisioner                                    kube-system
	ed166e253240c       138784d87c9c5       11 seconds ago       Running             coredns                   0                   d4e02c124f709       coredns-66bc5c9577-qctbs                               kube-system
	0d8cc01f3acbd       05baa95f5142d       52 seconds ago       Running             kube-proxy                0                   9d1ee823a15c2       kube-proxy-frlpg                                       kube-system
	d0f3f67b1f102       b1a8c6f707935       52 seconds ago       Running             kindnet-cni               0                   f1a6d1e17d43d       kindnet-jcqb9                                          kube-system
	459ad362844ec       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   d34c569cc4626       kube-scheduler-default-k8s-diff-port-609438            kube-system
	eb13e64310f28       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   4bcf9deea4a1d       kube-controller-manager-default-k8s-diff-port-609438   kube-system
	be628a67cb3ed       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   6f6ebbd6fbb40       kube-apiserver-default-k8s-diff-port-609438            kube-system
	a79dfb2c6db31       a1894772a478e       About a minute ago   Running             etcd                      0                   2ad7a160ea4de       etcd-default-k8s-diff-port-609438                      kube-system
	
	
	==> containerd <==
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.269240975Z" level=info msg="CreateContainer within sandbox \"d4e02c124f709296589df54e8f7f93d43ee806dccbd26464d609201e03032544\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed166e253240cdbdfc56301dbd8d8567b59792fb85b0d0dbd0d72189e5a069d5\""
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.271167743Z" level=info msg="StartContainer for \"ed166e253240cdbdfc56301dbd8d8567b59792fb85b0d0dbd0d72189e5a069d5\""
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.276670117Z" level=info msg="connecting to shim ed166e253240cdbdfc56301dbd8d8567b59792fb85b0d0dbd0d72189e5a069d5" address="unix:///run/containerd/s/3852ad87deb539a683ba63f41c208f0c64160eea58fb3338df14e43fd97e9a37" protocol=ttrpc version=3
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.279697844Z" level=info msg="Container 00dfaea3cc3d9d681a16a40db39e8b36acc58147c4a4bcba29b9f0947732bc54: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.293323759Z" level=info msg="CreateContainer within sandbox \"abcbb29d89b8effef39f23c0f3f77af0f2383dff37fdf8b1ab9e42b1a8a9a5e9\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"00dfaea3cc3d9d681a16a40db39e8b36acc58147c4a4bcba29b9f0947732bc54\""
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.296512703Z" level=info msg="StartContainer for \"00dfaea3cc3d9d681a16a40db39e8b36acc58147c4a4bcba29b9f0947732bc54\""
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.299384294Z" level=info msg="connecting to shim 00dfaea3cc3d9d681a16a40db39e8b36acc58147c4a4bcba29b9f0947732bc54" address="unix:///run/containerd/s/a9c241791f911861b5cfcd3b9aec455e35e631195cc17f0ac97e7cb03001f314" protocol=ttrpc version=3
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.363080677Z" level=info msg="StartContainer for \"ed166e253240cdbdfc56301dbd8d8567b59792fb85b0d0dbd0d72189e5a069d5\" returns successfully"
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.403834064Z" level=info msg="StartContainer for \"00dfaea3cc3d9d681a16a40db39e8b36acc58147c4a4bcba29b9f0947732bc54\" returns successfully"
	Nov 24 14:03:41 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:41.908603439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:ad098064-4a27-4674-9c05-03b1e253a816,Namespace:default,Attempt:0,}"
	Nov 24 14:03:41 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:41.958212830Z" level=info msg="connecting to shim 5b44381c74ffdb59c1d068c8d245c0227120a165ab453544aa62d965abc8e01b" address="unix:///run/containerd/s/305b5a19ae168cde4a06a02c0e2cd9e74d7b68984a3b039bcd720f4b331aa00b" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 14:03:42 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:42.028125441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:ad098064-4a27-4674-9c05-03b1e253a816,Namespace:default,Attempt:0,} returns sandbox id \"5b44381c74ffdb59c1d068c8d245c0227120a165ab453544aa62d965abc8e01b\""
	Nov 24 14:03:42 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:42.032618827Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.240175136Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.242061787Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937186"
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.244537943Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.247794400Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.248489579Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.21564958s"
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.248615816Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.255545070Z" level=info msg="CreateContainer within sandbox \"5b44381c74ffdb59c1d068c8d245c0227120a165ab453544aa62d965abc8e01b\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.268514231Z" level=info msg="Container 9c4f0887e02e4e2a389390604a999bdeb395e0061f85b2733f3f009c841ec536: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.279713475Z" level=info msg="CreateContainer within sandbox \"5b44381c74ffdb59c1d068c8d245c0227120a165ab453544aa62d965abc8e01b\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"9c4f0887e02e4e2a389390604a999bdeb395e0061f85b2733f3f009c841ec536\""
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.280670048Z" level=info msg="StartContainer for \"9c4f0887e02e4e2a389390604a999bdeb395e0061f85b2733f3f009c841ec536\""
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.281844421Z" level=info msg="connecting to shim 9c4f0887e02e4e2a389390604a999bdeb395e0061f85b2733f3f009c841ec536" address="unix:///run/containerd/s/305b5a19ae168cde4a06a02c0e2cd9e74d7b68984a3b039bcd720f4b331aa00b" protocol=ttrpc version=3
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.340743232Z" level=info msg="StartContainer for \"9c4f0887e02e4e2a389390604a999bdeb395e0061f85b2733f3f009c841ec536\" returns successfully"
	
	
	==> coredns [ed166e253240cdbdfc56301dbd8d8567b59792fb85b0d0dbd0d72189e5a069d5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47448 - 6948 "HINFO IN 4773065237209705457.4329358106017141151. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03988351s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-609438
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-609438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=default-k8s-diff-port-609438
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_02_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:02:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-609438
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:03:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:03:38 +0000   Mon, 24 Nov 2025 14:02:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:03:38 +0000   Mon, 24 Nov 2025 14:02:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:03:38 +0000   Mon, 24 Nov 2025 14:02:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:03:38 +0000   Mon, 24 Nov 2025 14:03:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-609438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                cceb120a-9f59-48c4-a660-aa41bd8d88a2
	  Boot ID:                    dd480c26-e101-4930-b98c-54c06b430fdc
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-qctbs                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-default-k8s-diff-port-609438                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         58s
	  kube-system                 kindnet-jcqb9                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-default-k8s-diff-port-609438             250m (12%)    0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-609438    200m (10%)    0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-proxy-frlpg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-default-k8s-diff-port-609438             100m (5%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 52s                kube-proxy       
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-609438 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-609438 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x7 over 71s)  kubelet          Node default-k8s-diff-port-609438 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node default-k8s-diff-port-609438 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node default-k8s-diff-port-609438 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node default-k8s-diff-port-609438 status is now: NodeHasSufficientPID
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           55s                node-controller  Node default-k8s-diff-port-609438 event: Registered Node default-k8s-diff-port-609438 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-609438 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 12:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014697] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497291] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033884] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.804993] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.476130] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [a79dfb2c6db3185c247a8edea7f54f9694063835ada40e0d4f8bb18721962197] <==
	{"level":"warn","ts":"2025-11-24T14:02:45.852115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:45.891272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:45.932613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:45.942867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:45.970765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.000119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.026133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.061726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.089536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.155658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.168663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.182002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.212157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.258107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.268641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.314942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.343824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.367560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.408164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.434863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.458611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.496063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.518190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.539111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.684310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55708","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:03:51 up  1:46,  0 user,  load average: 2.87, 3.43, 3.04
	Linux default-k8s-diff-port-609438 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d0f3f67b1f102d80491052bfec95c49cc4eadbe3bff4a7d6a3ed0fd779addfd1] <==
	I1124 14:02:58.476159       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:02:58.476391       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 14:02:58.476496       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:02:58.476508       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:02:58.476521       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:02:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:02:58.686498       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:02:58.686564       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:02:58.686574       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:02:58.760440       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:03:28.686515       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:03:28.686515       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 14:03:28.687855       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:03:28.761422       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 14:03:29.687082       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:03:29.687377       1 metrics.go:72] Registering metrics
	I1124 14:03:29.687567       1 controller.go:711] "Syncing nftables rules"
	I1124 14:03:38.694573       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:03:38.694635       1 main.go:301] handling current node
	I1124 14:03:48.688498       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:03:48.688548       1 main.go:301] handling current node
	
	
	==> kube-apiserver [be628a67cb3edc8f555e0e4a52eb70c6cfbc1b59edfe16c9b0515c4976eefd13] <==
	I1124 14:02:48.317834       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:02:48.322311       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 14:02:48.362195       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:02:48.435618       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:02:48.462052       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 14:02:48.476837       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:02:48.538533       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:02:48.558711       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:02:48.946346       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 14:02:48.973136       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 14:02:48.974079       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:02:50.475026       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:02:50.569650       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:02:50.788078       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 14:02:50.799168       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 14:02:50.800958       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:02:50.821904       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:02:51.249509       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:02:51.864387       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:02:51.884230       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 14:02:51.908911       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 14:02:56.757961       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:02:56.774275       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:02:57.057851       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 14:02:57.407948       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [eb13e64310f2866a582c82705404d464b3ef8275165d8ff7ddf618f5224a962b] <==
	I1124 14:02:56.512434       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 14:02:56.512524       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:02:56.520716       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 14:02:56.528422       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:02:56.532707       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 14:02:56.545487       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:02:56.545548       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 14:02:56.545631       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 14:02:56.545705       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-609438"
	I1124 14:02:56.545741       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 14:02:56.545776       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 14:02:56.545806       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 14:02:56.545963       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:02:56.547854       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:02:56.557576       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 14:02:56.557819       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:02:56.557841       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 14:02:56.560419       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 14:02:56.585155       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:02:56.592148       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:02:56.592148       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 14:02:56.595983       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:02:56.596009       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:02:56.596017       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:03:41.552401       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0d8cc01f3acbdf10be8708ac1417428a3f6e27d5d8157f32bd1a5668a144a05e] <==
	I1124 14:02:58.748852       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:02:58.849738       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:02:58.952310       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:02:58.952351       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 14:02:58.952438       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:02:59.007296       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:02:59.007388       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:02:59.040244       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:02:59.040760       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:02:59.040789       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:02:59.046828       1 config.go:200] "Starting service config controller"
	I1124 14:02:59.046849       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:02:59.046877       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:02:59.046883       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:02:59.046909       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:02:59.046919       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:02:59.047701       1 config.go:309] "Starting node config controller"
	I1124 14:02:59.047722       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:02:59.047728       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:02:59.148034       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:02:59.148081       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:02:59.159995       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [459ad362844ecb08400a072a5a4113b697f5c8f001d2e3d39582353e18a4c77b] <==
	I1124 14:02:46.744784       1 serving.go:386] Generated self-signed cert in-memory
	I1124 14:02:51.010999       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:02:51.011038       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:02:51.016297       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 14:02:51.016551       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 14:02:51.016712       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:02:51.016805       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:02:51.016731       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:02:51.016753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 14:02:51.016767       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:02:51.017340       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:02:51.119384       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 14:02:51.119506       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:02:51.119565       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:02:53 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:53.489729    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-609438" podStartSLOduration=0.489692096 podStartE2EDuration="489.692096ms" podCreationTimestamp="2025-11-24 14:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:02:53.468145965 +0000 UTC m=+1.657769133" watchObservedRunningTime="2025-11-24 14:02:53.489692096 +0000 UTC m=+1.679315263"
	Nov 24 14:02:53 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:53.519203    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-609438" podStartSLOduration=0.519183143 podStartE2EDuration="519.183143ms" podCreationTimestamp="2025-11-24 14:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:02:53.490183482 +0000 UTC m=+1.679806674" watchObservedRunningTime="2025-11-24 14:02:53.519183143 +0000 UTC m=+1.708806335"
	Nov 24 14:02:53 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:53.552307    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-609438" podStartSLOduration=0.552288297 podStartE2EDuration="552.288297ms" podCreationTimestamp="2025-11-24 14:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:02:53.519051531 +0000 UTC m=+1.708674723" watchObservedRunningTime="2025-11-24 14:02:53.552288297 +0000 UTC m=+1.741911464"
	Nov 24 14:02:56 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:56.554079    1473 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 14:02:56 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:56.555454    1473 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.388131    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92836c58-7b28-4b1b-838d-9491cd23823b-lib-modules\") pod \"kindnet-jcqb9\" (UID: \"92836c58-7b28-4b1b-838d-9491cd23823b\") " pod="kube-system/kindnet-jcqb9"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.388181    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8lr4\" (UniqueName: \"kubernetes.io/projected/92836c58-7b28-4b1b-838d-9491cd23823b-kube-api-access-t8lr4\") pod \"kindnet-jcqb9\" (UID: \"92836c58-7b28-4b1b-838d-9491cd23823b\") " pod="kube-system/kindnet-jcqb9"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.388207    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/814cc9f1-7449-431c-a35d-3ac3b4d05db9-kube-proxy\") pod \"kube-proxy-frlpg\" (UID: \"814cc9f1-7449-431c-a35d-3ac3b4d05db9\") " pod="kube-system/kube-proxy-frlpg"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.388225    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/814cc9f1-7449-431c-a35d-3ac3b4d05db9-xtables-lock\") pod \"kube-proxy-frlpg\" (UID: \"814cc9f1-7449-431c-a35d-3ac3b4d05db9\") " pod="kube-system/kube-proxy-frlpg"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.388242    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/814cc9f1-7449-431c-a35d-3ac3b4d05db9-lib-modules\") pod \"kube-proxy-frlpg\" (UID: \"814cc9f1-7449-431c-a35d-3ac3b4d05db9\") " pod="kube-system/kube-proxy-frlpg"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.388261    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/92836c58-7b28-4b1b-838d-9491cd23823b-cni-cfg\") pod \"kindnet-jcqb9\" (UID: \"92836c58-7b28-4b1b-838d-9491cd23823b\") " pod="kube-system/kindnet-jcqb9"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.388277    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lllv\" (UniqueName: \"kubernetes.io/projected/814cc9f1-7449-431c-a35d-3ac3b4d05db9-kube-api-access-5lllv\") pod \"kube-proxy-frlpg\" (UID: \"814cc9f1-7449-431c-a35d-3ac3b4d05db9\") " pod="kube-system/kube-proxy-frlpg"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.388298    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92836c58-7b28-4b1b-838d-9491cd23823b-xtables-lock\") pod \"kindnet-jcqb9\" (UID: \"92836c58-7b28-4b1b-838d-9491cd23823b\") " pod="kube-system/kindnet-jcqb9"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.567562    1473 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 14:02:59 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:59.480198    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-jcqb9" podStartSLOduration=2.480176019 podStartE2EDuration="2.480176019s" podCreationTimestamp="2025-11-24 14:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:02:58.497591111 +0000 UTC m=+6.687214303" watchObservedRunningTime="2025-11-24 14:02:59.480176019 +0000 UTC m=+7.669799186"
	Nov 24 14:03:02 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:02.316210    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-frlpg" podStartSLOduration=5.316187635 podStartE2EDuration="5.316187635s" podCreationTimestamp="2025-11-24 14:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:02:59.480370113 +0000 UTC m=+7.669993281" watchObservedRunningTime="2025-11-24 14:03:02.316187635 +0000 UTC m=+10.505810803"
	Nov 24 14:03:38 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:38.737042    1473 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 14:03:38 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:38.944808    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/98d7eb97-3a94-4904-9af3-f063689cec40-tmp\") pod \"storage-provisioner\" (UID: \"98d7eb97-3a94-4904-9af3-f063689cec40\") " pod="kube-system/storage-provisioner"
	Nov 24 14:03:38 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:38.944876    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpl6d\" (UniqueName: \"kubernetes.io/projected/98d7eb97-3a94-4904-9af3-f063689cec40-kube-api-access-hpl6d\") pod \"storage-provisioner\" (UID: \"98d7eb97-3a94-4904-9af3-f063689cec40\") " pod="kube-system/storage-provisioner"
	Nov 24 14:03:38 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:38.944900    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de-config-volume\") pod \"coredns-66bc5c9577-qctbs\" (UID: \"cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de\") " pod="kube-system/coredns-66bc5c9577-qctbs"
	Nov 24 14:03:38 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:38.944920    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm8bp\" (UniqueName: \"kubernetes.io/projected/cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de-kube-api-access-rm8bp\") pod \"coredns-66bc5c9577-qctbs\" (UID: \"cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de\") " pod="kube-system/coredns-66bc5c9577-qctbs"
	Nov 24 14:03:39 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:39.600204    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.600171711 podStartE2EDuration="40.600171711s" podCreationTimestamp="2025-11-24 14:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:03:39.599491432 +0000 UTC m=+47.789114617" watchObservedRunningTime="2025-11-24 14:03:39.600171711 +0000 UTC m=+47.789794879"
	Nov 24 14:03:39 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:39.600442    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qctbs" podStartSLOduration=42.600434459 podStartE2EDuration="42.600434459s" podCreationTimestamp="2025-11-24 14:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:03:39.57748311 +0000 UTC m=+47.767106351" watchObservedRunningTime="2025-11-24 14:03:39.600434459 +0000 UTC m=+47.790057635"
	Nov 24 14:03:41 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:41.768952    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4fpk\" (UniqueName: \"kubernetes.io/projected/ad098064-4a27-4674-9c05-03b1e253a816-kube-api-access-d4fpk\") pod \"busybox\" (UID: \"ad098064-4a27-4674-9c05-03b1e253a816\") " pod="default/busybox"
	Nov 24 14:03:44 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:44.596618    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.37725709 podStartE2EDuration="3.596505334s" podCreationTimestamp="2025-11-24 14:03:41 +0000 UTC" firstStartedPulling="2025-11-24 14:03:42.030260153 +0000 UTC m=+50.219883329" lastFinishedPulling="2025-11-24 14:03:44.249508405 +0000 UTC m=+52.439131573" observedRunningTime="2025-11-24 14:03:44.596379466 +0000 UTC m=+52.786002642" watchObservedRunningTime="2025-11-24 14:03:44.596505334 +0000 UTC m=+52.786128518"
	
	
	==> storage-provisioner [00dfaea3cc3d9d681a16a40db39e8b36acc58147c4a4bcba29b9f0947732bc54] <==
	I1124 14:03:39.419369       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:03:39.434895       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:03:39.435178       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:03:39.437646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:39.444147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:03:39.444467       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:03:39.444850       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-609438_af2eb735-6513-4ee2-94f5-9fedff14594f!
	I1124 14:03:39.445626       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b412a730-d60a-41a8-bbbf-d1e5b5b11fb8", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-609438_af2eb735-6513-4ee2-94f5-9fedff14594f became leader
	W1124 14:03:39.451258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:39.457963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:03:39.545832       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-609438_af2eb735-6513-4ee2-94f5-9fedff14594f!
	W1124 14:03:41.461939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:41.467569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:43.470367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:43.477631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:45.482107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:45.488970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:47.492740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:47.499012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:49.502743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:49.509999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-609438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-609438
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-609438:

-- stdout --
	[
	    {
	        "Id": "e60e4efae158c5a5a7453ef33e59ec253543fb035754fa5f4e30943f9ec7969a",
	        "Created": "2025-11-24T14:02:23.924453268Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213017,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:02:24.041059545Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/e60e4efae158c5a5a7453ef33e59ec253543fb035754fa5f4e30943f9ec7969a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e60e4efae158c5a5a7453ef33e59ec253543fb035754fa5f4e30943f9ec7969a/hostname",
	        "HostsPath": "/var/lib/docker/containers/e60e4efae158c5a5a7453ef33e59ec253543fb035754fa5f4e30943f9ec7969a/hosts",
	        "LogPath": "/var/lib/docker/containers/e60e4efae158c5a5a7453ef33e59ec253543fb035754fa5f4e30943f9ec7969a/e60e4efae158c5a5a7453ef33e59ec253543fb035754fa5f4e30943f9ec7969a-json.log",
	        "Name": "/default-k8s-diff-port-609438",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-609438:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-609438",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e60e4efae158c5a5a7453ef33e59ec253543fb035754fa5f4e30943f9ec7969a",
	                "LowerDir": "/var/lib/docker/overlay2/af912c60810958b2495a8e05e93b587823eb87ecd651998279108ce95e60bdd1-init/diff:/var/lib/docker/overlay2/f206897dad0d7c6b66379aa7c75402ab98ba158a4fc5aedf84eda3d57da10430/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af912c60810958b2495a8e05e93b587823eb87ecd651998279108ce95e60bdd1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af912c60810958b2495a8e05e93b587823eb87ecd651998279108ce95e60bdd1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af912c60810958b2495a8e05e93b587823eb87ecd651998279108ce95e60bdd1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-609438",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-609438/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-609438",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-609438",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-609438",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "92b3fe73ad5646614d6e8497cac5042fe28f99f96de535116de434d264224cc1",
	            "SandboxKey": "/var/run/docker/netns/92b3fe73ad56",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-609438": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:c0:c4:cd:26:9c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "18f87d422e57d01218d717420ae39221feb8c7f5806d615eefa583d8581f96bf",
	                    "EndpointID": "ece204dd69efe63eb7de38db0e784591e44a1308f3abc699a3f72a5774f87abc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-609438",
	                        "e60e4efae158"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
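The inspect output above shows the loopback port mappings minikube depends on for this node: 22/tcp (SSH) on 127.0.0.1:33063 and the 8444/tcp API server port on 127.0.0.1:33066. For reference, a single mapping can be pulled out of that JSON with the same Go template the test harness runs later in these logs, assuming the container is still running:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-609438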
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-609438 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-609438 logs -n 25: (1.247230423s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p cilium-803934 sudo crio config                                                                                                                                                                                                                   │ cilium-803934                │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ delete  │ -p cilium-803934                                                                                                                                                                                                                                    │ cilium-803934                │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p force-systemd-env-134839 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-134839     │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p kubernetes-upgrade-758885                                                                                                                                                                                                                        │ kubernetes-upgrade-758885    │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p cert-expiration-865605 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-865605       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ force-systemd-env-134839 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-134839     │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p force-systemd-env-134839                                                                                                                                                                                                                         │ force-systemd-env-134839     │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p cert-options-440754 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-440754          │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ cert-options-440754 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-440754          │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ -p cert-options-440754 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-440754          │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p cert-options-440754                                                                                                                                                                                                                              │ cert-options-440754          │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p old-k8s-version-318786 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 14:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-318786 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ stop    │ -p old-k8s-version-318786 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-318786 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ start   │ -p old-k8s-version-318786 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ image   │ old-k8s-version-318786 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ pause   │ -p old-k8s-version-318786 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ start   │ -p cert-expiration-865605 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-865605       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ unpause │ -p old-k8s-version-318786 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ delete  │ -p old-k8s-version-318786                                                                                                                                                                                                                           │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ delete  │ -p old-k8s-version-318786                                                                                                                                                                                                                           │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ start   │ -p default-k8s-diff-port-609438 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:03 UTC │
	│ delete  │ -p cert-expiration-865605                                                                                                                                                                                                                           │ cert-expiration-865605       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ start   │ -p embed-certs-593634 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:03 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:02:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:02:25.355768  213570 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:02:25.355897  213570 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:02:25.355929  213570 out.go:374] Setting ErrFile to fd 2...
	I1124 14:02:25.355935  213570 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:02:25.356214  213570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 14:02:25.356610  213570 out.go:368] Setting JSON to false
	I1124 14:02:25.357458  213570 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6294,"bootTime":1763986651,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 14:02:25.357531  213570 start.go:143] virtualization:  
	I1124 14:02:25.363130  213570 out.go:179] * [embed-certs-593634] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:02:25.366080  213570 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:02:25.366317  213570 notify.go:221] Checking for updates...
	I1124 14:02:25.371678  213570 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:02:25.374517  213570 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:02:25.377392  213570 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	I1124 14:02:25.380291  213570 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:02:25.383233  213570 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:02:25.386803  213570 config.go:182] Loaded profile config "default-k8s-diff-port-609438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:02:25.386988  213570 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:02:25.428466  213570 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:02:25.428628  213570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:02:25.551573  213570 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:55 SystemTime:2025-11-24 14:02:25.537516273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:02:25.551683  213570 docker.go:319] overlay module found
	I1124 14:02:25.556682  213570 out.go:179] * Using the docker driver based on user configuration
	I1124 14:02:25.559709  213570 start.go:309] selected driver: docker
	I1124 14:02:25.559726  213570 start.go:927] validating driver "docker" against <nil>
	I1124 14:02:25.559738  213570 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:02:25.560805  213570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:02:25.668193  213570 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-24 14:02:25.655788801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:02:25.668344  213570 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:02:25.668552  213570 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:02:25.671717  213570 out.go:179] * Using Docker driver with root privileges
	I1124 14:02:25.674536  213570 cni.go:84] Creating CNI manager for ""
	I1124 14:02:25.674610  213570 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:02:25.674621  213570 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:02:25.674693  213570 start.go:353] cluster config:
	{Name:embed-certs-593634 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:02:25.677759  213570 out.go:179] * Starting "embed-certs-593634" primary control-plane node in "embed-certs-593634" cluster
	I1124 14:02:25.680596  213570 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 14:02:25.683549  213570 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:02:25.686518  213570 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:02:25.686579  213570 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1124 14:02:25.686594  213570 cache.go:65] Caching tarball of preloaded images
	I1124 14:02:25.686679  213570 preload.go:238] Found /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1124 14:02:25.686689  213570 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 14:02:25.686792  213570 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/config.json ...
	I1124 14:02:25.686808  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/config.json: {Name:mkcf0b417a9473ceb4b66956bfa520a43f4ebbeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:25.686945  213570 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:02:25.710900  213570 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:02:25.710919  213570 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:02:25.710933  213570 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:02:25.710962  213570 start.go:360] acquireMachinesLock for embed-certs-593634: {Name:mk435fa1f228450b1765e3435053e751c40a1834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:02:25.711053  213570 start.go:364] duration metric: took 77.449µs to acquireMachinesLock for "embed-certs-593634"
	I1124 14:02:25.711077  213570 start.go:93] Provisioning new machine with config: &{Name:embed-certs-593634 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 14:02:25.711153  213570 start.go:125] createHost starting for "" (driver="docker")
	I1124 14:02:23.909747  212383 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-609438 --name default-k8s-diff-port-609438 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-609438 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-609438 --network default-k8s-diff-port-609438 --ip 192.168.85.2 --volume default-k8s-diff-port-609438:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
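	# Note: each --publish=127.0.0.1::<port> flag in the docker run above leaves the host port
	# empty, so Docker assigns an ephemeral loopback port; these become the 33063-33067
	# mappings shown in the docker inspect output earlier in this report.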
	I1124 14:02:24.307279  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Running}}
	I1124 14:02:24.327311  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:24.369313  212383 cli_runner.go:164] Run: docker exec default-k8s-diff-port-609438 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:02:24.459655  212383 oci.go:144] the created container "default-k8s-diff-port-609438" has a running status.
	I1124 14:02:24.459682  212383 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa...
	I1124 14:02:24.627125  212383 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:02:24.888609  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:24.933748  212383 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:02:24.933772  212383 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-609438 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 14:02:25.043026  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:25.089321  212383 machine.go:94] provisionDockerMachine start ...
	I1124 14:02:25.089431  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:25.153799  212383 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:25.154239  212383 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 14:02:25.154258  212383 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:02:25.461029  212383 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-609438
	
	I1124 14:02:25.461072  212383 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-609438"
	I1124 14:02:25.461152  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:25.543103  212383 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:25.543625  212383 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 14:02:25.543643  212383 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-609438 && echo "default-k8s-diff-port-609438" | sudo tee /etc/hostname
	I1124 14:02:25.773225  212383 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-609438
	
	I1124 14:02:25.773297  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:25.800013  212383 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:25.801080  212383 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 14:02:25.801108  212383 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-609438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-609438/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-609438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:02:26.006217  212383 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:02:26.006244  212383 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2368/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2368/.minikube}
	I1124 14:02:26.006263  212383 ubuntu.go:190] setting up certificates
	I1124 14:02:26.006272  212383 provision.go:84] configureAuth start
	I1124 14:02:26.006350  212383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-609438
	I1124 14:02:26.026909  212383 provision.go:143] copyHostCerts
	I1124 14:02:26.026970  212383 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem, removing ...
	I1124 14:02:26.026980  212383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem
	I1124 14:02:26.027046  212383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem (1082 bytes)
	I1124 14:02:26.027134  212383 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem, removing ...
	I1124 14:02:26.027140  212383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem
	I1124 14:02:26.027166  212383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem (1123 bytes)
	I1124 14:02:26.027243  212383 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem, removing ...
	I1124 14:02:26.027248  212383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem
	I1124 14:02:26.027271  212383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem (1679 bytes)
	I1124 14:02:26.027316  212383 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-609438 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-609438 localhost minikube]
	I1124 14:02:26.479334  212383 provision.go:177] copyRemoteCerts
	I1124 14:02:26.479453  212383 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:02:26.479529  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:26.509970  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:26.633721  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:02:26.665930  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 14:02:26.697677  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 14:02:26.732905  212383 provision.go:87] duration metric: took 726.609261ms to configureAuth
	I1124 14:02:26.732938  212383 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:02:26.733137  212383 config.go:182] Loaded profile config "default-k8s-diff-port-609438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:02:26.733153  212383 machine.go:97] duration metric: took 1.643811371s to provisionDockerMachine
	I1124 14:02:26.733161  212383 client.go:176] duration metric: took 7.487822203s to LocalClient.Create
	I1124 14:02:26.733175  212383 start.go:167] duration metric: took 7.487885367s to libmachine.API.Create "default-k8s-diff-port-609438"
	I1124 14:02:26.733189  212383 start.go:293] postStartSetup for "default-k8s-diff-port-609438" (driver="docker")
	I1124 14:02:26.733198  212383 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:02:26.733271  212383 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:02:26.733323  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:26.763570  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:26.897119  212383 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:02:26.901182  212383 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:02:26.901211  212383 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:02:26.901223  212383 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/addons for local assets ...
	I1124 14:02:26.901281  212383 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/files for local assets ...
	I1124 14:02:26.901360  212383 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem -> 41782.pem in /etc/ssl/certs
	I1124 14:02:26.901463  212383 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:02:26.909763  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:02:26.930128  212383 start.go:296] duration metric: took 196.924439ms for postStartSetup
	I1124 14:02:26.930508  212383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-609438
	I1124 14:02:26.950744  212383 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/config.json ...
	I1124 14:02:26.951035  212383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:02:26.951091  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:26.973535  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:27.077778  212383 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:02:27.083066  212383 start.go:128] duration metric: took 7.841363739s to createHost
	I1124 14:02:27.083089  212383 start.go:83] releasing machines lock for "default-k8s-diff-port-609438", held for 7.84148292s
	I1124 14:02:27.083163  212383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-609438
	I1124 14:02:27.105539  212383 ssh_runner.go:195] Run: cat /version.json
	I1124 14:02:27.105585  212383 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:02:27.105661  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:27.105589  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:27.149461  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:27.157732  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:27.367320  212383 ssh_runner.go:195] Run: systemctl --version
	I1124 14:02:27.374447  212383 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:02:27.380473  212383 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:02:27.380647  212383 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:02:27.413935  212383 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 14:02:27.414007  212383 start.go:496] detecting cgroup driver to use...
	I1124 14:02:27.414056  212383 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:02:27.414133  212383 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 14:02:27.430159  212383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 14:02:27.444285  212383 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:02:27.444392  212383 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:02:27.461944  212383 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:02:27.481645  212383 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:02:27.639351  212383 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:02:27.799286  212383 docker.go:234] disabling docker service ...
	I1124 14:02:27.799350  212383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:02:27.831375  212383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:02:27.845484  212383 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:02:27.983498  212383 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:02:28.133537  212383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:02:28.150716  212383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:02:28.166057  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 14:02:28.175128  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 14:02:28.184145  212383 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 14:02:28.184265  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 14:02:28.192987  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:02:28.202626  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 14:02:28.211553  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:02:28.220020  212383 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:02:28.228018  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 14:02:28.236891  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 14:02:28.245507  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 14:02:28.254226  212383 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:02:28.262068  212383 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:02:28.269803  212383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:28.442896  212383 ssh_runner.go:195] Run: sudo systemctl restart containerd
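	# The sed edits above switch containerd to the "cgroupfs" cgroup driver (SystemdCgroup = false)
	# and set enable_unprivileged_ports = true before the restart. A minimal sanity check of the
	# resulting config, assuming the node container is still running, would be:
	#   docker exec default-k8s-diff-port-609438 grep -E 'SystemdCgroup|enable_unprivileged_ports' /etc/containerd/config.toml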
	I1124 14:02:28.596361  212383 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 14:02:28.596444  212383 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 14:02:28.602936  212383 start.go:564] Will wait 60s for crictl version
	I1124 14:02:28.603014  212383 ssh_runner.go:195] Run: which crictl
	I1124 14:02:28.607012  212383 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:02:28.645174  212383 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 14:02:28.645247  212383 ssh_runner.go:195] Run: containerd --version
	I1124 14:02:28.669934  212383 ssh_runner.go:195] Run: containerd --version
	I1124 14:02:28.700929  212383 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 14:02:28.704729  212383 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-609438 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:02:28.734893  212383 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 14:02:28.738862  212383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:02:28.749508  212383 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-609438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-609438 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:02:28.749613  212383 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:02:28.749681  212383 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:02:28.782633  212383 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:02:28.782654  212383 containerd.go:534] Images already preloaded, skipping extraction
	I1124 14:02:28.782711  212383 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:02:28.839126  212383 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:02:28.839147  212383 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:02:28.839155  212383 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1124 14:02:28.839244  212383 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-609438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-609438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
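	# The kubelet unit drop-in above lands in /etc/systemd/system/kubelet.service.d (the directory
	# is created a few lines further down in this log). One way to confirm the flags kubelet was
	# actually started with, assuming the node container is up, is:
	#   docker exec default-k8s-diff-port-609438 systemctl cat kubelet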
	I1124 14:02:28.839314  212383 ssh_runner.go:195] Run: sudo crictl info
	I1124 14:02:28.874904  212383 cni.go:84] Creating CNI manager for ""
	I1124 14:02:28.874924  212383 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:02:28.874940  212383 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:02:28.874963  212383 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-609438 NodeName:default-k8s-diff-port-609438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/c
erts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:02:28.875085  212383 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-609438"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:02:28.875154  212383 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
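	# In the KubeletConfiguration printed above, the eviction thresholds are intentionally zeroed
	# ("disable disk resource management by default"), so disk pressure never evicts pods during
	# these CI runs. Once the cluster is up, the config kubelet actually loaded can be read back
	# with, e.g.:
	#   kubectl get --raw "/api/v1/nodes/default-k8s-diff-port-609438/proxy/configz"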
	I1124 14:02:28.884597  212383 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:02:28.884669  212383 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:02:25.714459  213570 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:02:25.714725  213570 start.go:159] libmachine.API.Create for "embed-certs-593634" (driver="docker")
	I1124 14:02:25.714819  213570 client.go:173] LocalClient.Create starting
	I1124 14:02:25.714954  213570 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem
	I1124 14:02:25.715008  213570 main.go:143] libmachine: Decoding PEM data...
	I1124 14:02:25.715051  213570 main.go:143] libmachine: Parsing certificate...
	I1124 14:02:25.715148  213570 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem
	I1124 14:02:25.715206  213570 main.go:143] libmachine: Decoding PEM data...
	I1124 14:02:25.715261  213570 main.go:143] libmachine: Parsing certificate...
	I1124 14:02:25.715745  213570 cli_runner.go:164] Run: docker network inspect embed-certs-593634 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:02:25.736780  213570 cli_runner.go:211] docker network inspect embed-certs-593634 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:02:25.736871  213570 network_create.go:284] running [docker network inspect embed-certs-593634] to gather additional debugging logs...
	I1124 14:02:25.736888  213570 cli_runner.go:164] Run: docker network inspect embed-certs-593634
	W1124 14:02:25.769114  213570 cli_runner.go:211] docker network inspect embed-certs-593634 returned with exit code 1
	I1124 14:02:25.769141  213570 network_create.go:287] error running [docker network inspect embed-certs-593634]: docker network inspect embed-certs-593634: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-593634 not found
	I1124 14:02:25.769154  213570 network_create.go:289] output of [docker network inspect embed-certs-593634]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-593634 not found
	
	** /stderr **
	I1124 14:02:25.769257  213570 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:02:25.800766  213570 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e5e15b13860d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:3d:37:c4:cc:77} reservation:<nil>}
	I1124 14:02:25.801103  213570 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-66593a990bce IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:c0:9b:bc:41:ca} reservation:<nil>}
	I1124 14:02:25.801995  213570 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-37e9fb0954cf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:0b:6f:6e:b2:8c} reservation:<nil>}
	I1124 14:02:25.802424  213570 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e9170}
	I1124 14:02:25.802442  213570 network_create.go:124] attempt to create docker network embed-certs-593634 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 14:02:25.802493  213570 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-593634 embed-certs-593634
	I1124 14:02:25.881093  213570 network_create.go:108] docker network embed-certs-593634 192.168.76.0/24 created
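The three "skipping subnet" lines above show the probe that precedes this network creation: 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 are already bound to existing bridges, so minikube settles on 192.168.76.0/24. A rough sketch of that probe-and-skip loop follows; the step of 9 between candidates simply matches the 49 → 58 → 67 → 76 progression seen here, and the function and variable names are illustrative rather than minikube's own.

package main

import "fmt"

// firstFreeSubnet walks 192.168.<octet>.0/24 candidates in steps of 9 and
// returns the first one not already claimed by an existing docker network.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	// Subnets reported as taken in the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.76.0/24
}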
	I1124 14:02:25.881122  213570 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-593634" container
	I1124 14:02:25.881203  213570 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:02:25.903081  213570 cli_runner.go:164] Run: docker volume create embed-certs-593634 --label name.minikube.sigs.k8s.io=embed-certs-593634 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:02:25.931462  213570 oci.go:103] Successfully created a docker volume embed-certs-593634
	I1124 14:02:25.931542  213570 cli_runner.go:164] Run: docker run --rm --name embed-certs-593634-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-593634 --entrypoint /usr/bin/test -v embed-certs-593634:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:02:26.581166  213570 oci.go:107] Successfully prepared a docker volume embed-certs-593634
	I1124 14:02:26.581232  213570 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:02:26.581244  213570 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:02:26.581311  213570 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-593634:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 14:02:28.894421  212383 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1124 14:02:28.909480  212383 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:02:28.924519  212383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
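The 2241-byte file copied to /var/tmp/minikube/kubeadm.yaml.new here is the multi-document kubeadm/kubelet/kube-proxy config dumped in full earlier in this log. Purely as an illustration (not part of the test run), such a dump can be split back into its documents and sanity-checked with gopkg.in/yaml.v3; the local file name kubeadm.yaml below is a placeholder.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Placeholder path; in the log the rendered config lands in /var/tmp/minikube/kubeadm.yaml.new.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Expect InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration.
		fmt.Printf("%v (%v)\n", doc["kind"], doc["apiVersion"])
	}
}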
	I1124 14:02:28.939585  212383 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:02:28.943813  212383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:02:28.954534  212383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:29.104027  212383 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:02:29.125453  212383 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438 for IP: 192.168.85.2
	I1124 14:02:29.125476  212383 certs.go:195] generating shared ca certs ...
	I1124 14:02:29.125503  212383 certs.go:227] acquiring lock for ca certs: {Name:mkcd8707c782acde0e57168c044a3df942dc4ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.125641  212383 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key
	I1124 14:02:29.125695  212383 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key
	I1124 14:02:29.125707  212383 certs.go:257] generating profile certs ...
	I1124 14:02:29.125768  212383 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.key
	I1124 14:02:29.125789  212383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.crt with IP's: []
	I1124 14:02:29.324459  212383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.crt ...
	I1124 14:02:29.324491  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.crt: {Name:mk8aada29dd487d5091685276369440b7d624321 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.324640  212383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.key ...
	I1124 14:02:29.324656  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.key: {Name:mka039edce6f440d55864b8259b2b6e6a4166f35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.324742  212383 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key.0b070d75
	I1124 14:02:29.324762  212383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt.0b070d75 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 14:02:29.388053  212383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt.0b070d75 ...
	I1124 14:02:29.388089  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt.0b070d75: {Name:mk8c33f3dd28832381eccdbc39352bbcf3fad513 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.388234  212383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key.0b070d75 ...
	I1124 14:02:29.388250  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key.0b070d75: {Name:mk1a2d7229ced6b28d71658195699ecc4e6d6cbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.388323  212383 certs.go:382] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt.0b070d75 -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt
	I1124 14:02:29.388407  212383 certs.go:386] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key.0b070d75 -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key
	I1124 14:02:29.388467  212383 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.key
	I1124 14:02:29.388494  212383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.crt with IP's: []
	I1124 14:02:29.607942  212383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.crt ...
	I1124 14:02:29.607978  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.crt: {Name:mkf0227a8560a7238360c53d12e60293f9779f1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.608133  212383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.key ...
	I1124 14:02:29.608148  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.key: {Name:mkdb69944b7ff660a91a53e6ae6208e817233479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
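The certs.go/crypto.go lines above generate the profile's key pairs: a client cert, an apiserver serving cert signed for the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], and an aggregator proxy-client cert. Below is a condensed standard-library sketch of that kind of CA-signed serving cert; the CA is generated in place only for illustration (minikube signs with the pre-existing minikubeCA key instead) and all names are hypothetical.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key pair (minikube would reuse the existing minikubeCA instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert carrying the IP SANs listed in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}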
	I1124 14:02:29.608326  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem (1338 bytes)
	W1124 14:02:29.608368  212383 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178_empty.pem, impossibly tiny 0 bytes
	I1124 14:02:29.608383  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:02:29.608412  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem (1082 bytes)
	I1124 14:02:29.608442  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:02:29.608468  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem (1679 bytes)
	I1124 14:02:29.608515  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:02:29.609076  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:02:29.626013  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 14:02:29.643798  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:02:29.661375  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:02:29.679743  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 14:02:29.696528  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:02:29.728013  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:02:29.773516  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:02:29.805187  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem --> /usr/share/ca-certificates/4178.pem (1338 bytes)
	I1124 14:02:29.826865  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /usr/share/ca-certificates/41782.pem (1708 bytes)
	I1124 14:02:29.847529  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:02:29.867886  212383 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:02:29.882919  212383 ssh_runner.go:195] Run: openssl version
	I1124 14:02:29.889477  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41782.pem && ln -fs /usr/share/ca-certificates/41782.pem /etc/ssl/certs/41782.pem"
	I1124 14:02:29.898302  212383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41782.pem
	I1124 14:02:29.904667  212383 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/41782.pem
	I1124 14:02:29.904736  212383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41782.pem
	I1124 14:02:29.948420  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:02:29.957558  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:02:29.966733  212383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:29.970899  212383 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:29.970989  212383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:30.019996  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:02:30.030890  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4178.pem && ln -fs /usr/share/ca-certificates/4178.pem /etc/ssl/certs/4178.pem"
	I1124 14:02:30.057890  212383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4178.pem
	I1124 14:02:30.080661  212383 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4178.pem
	I1124 14:02:30.080813  212383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4178.pem
	I1124 14:02:30.155115  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4178.pem /etc/ssl/certs/51391683.0"
	I1124 14:02:30.165475  212383 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:02:30.170978  212383 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:02:30.171035  212383 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-609438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-609438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:02:30.171124  212383 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 14:02:30.171192  212383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:02:30.211462  212383 cri.go:89] found id: ""
	I1124 14:02:30.211552  212383 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:02:30.226907  212383 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:02:30.236649  212383 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:02:30.236720  212383 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:02:30.248370  212383 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:02:30.248462  212383 kubeadm.go:158] found existing configuration files:
	
	I1124 14:02:30.248548  212383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1124 14:02:30.262084  212383 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:02:30.262152  212383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:02:30.270330  212383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1124 14:02:30.279476  212383 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:02:30.279543  212383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:02:30.288703  212383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1124 14:02:30.297950  212383 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:02:30.298023  212383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:02:30.310718  212383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1124 14:02:30.320531  212383 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:02:30.320603  212383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 14:02:30.329639  212383 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:02:30.406424  212383 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 14:02:30.406661  212383 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:02:30.479025  212383 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 14:02:31.562417  213570 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-593634:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.981062358s)
	I1124 14:02:31.562447  213570 kic.go:203] duration metric: took 4.981201018s to extract preloaded images to volume ...
	W1124 14:02:31.562585  213570 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 14:02:31.562696  213570 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 14:02:31.653956  213570 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-593634 --name embed-certs-593634 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-593634 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-593634 --network embed-certs-593634 --ip 192.168.76.2 --volume embed-certs-593634:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:02:32.104099  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Running}}
	I1124 14:02:32.133617  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:02:32.170125  213570 cli_runner.go:164] Run: docker exec embed-certs-593634 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:02:32.243591  213570 oci.go:144] the created container "embed-certs-593634" has a running status.
	I1124 14:02:32.243619  213570 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa...
	I1124 14:02:33.008353  213570 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:02:33.030437  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:02:33.051118  213570 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:02:33.051142  213570 kic_runner.go:114] Args: [docker exec --privileged embed-certs-593634 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 14:02:33.146272  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:02:33.172981  213570 machine.go:94] provisionDockerMachine start ...
	I1124 14:02:33.173175  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:33.203273  213570 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:33.203611  213570 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 14:02:33.203620  213570 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:02:33.204370  213570 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
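The handshake-failed line above is expected on first contact: sshd inside the just-started container is not accepting connections yet, and the provisioner keeps retrying until the dial succeeds (roughly three seconds later in this run). A minimal retry sketch using golang.org/x/crypto/ssh follows; the port 33068 and the docker user come from the surrounding log, everything else (names, attempt count, empty auth) is illustrative.

package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps attempting an SSH connection until the target's sshd
// is ready or the attempt budget is exhausted.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		lastErr = err
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("ssh not reachable after %d attempts: %w", attempts, lastErr)
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",           // user provisioned into the kic container
		Auth:            []ssh.AuthMethod{}, // key-based auth omitted in this sketch
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         5 * time.Second,
	}
	if _, err := dialWithRetry("127.0.0.1:33068", cfg, 10); err != nil {
		fmt.Println(err)
	}
}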
	I1124 14:02:36.376430  213570 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-593634
	
	I1124 14:02:36.376458  213570 ubuntu.go:182] provisioning hostname "embed-certs-593634"
	I1124 14:02:36.376538  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:36.401139  213570 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:36.401453  213570 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 14:02:36.401469  213570 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-593634 && echo "embed-certs-593634" | sudo tee /etc/hostname
	I1124 14:02:36.589650  213570 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-593634
	
	I1124 14:02:36.589799  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:36.618006  213570 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:36.618310  213570 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 14:02:36.618326  213570 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-593634' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-593634/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-593634' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:02:36.779947  213570 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:02:36.780024  213570 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2368/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2368/.minikube}
	I1124 14:02:36.780065  213570 ubuntu.go:190] setting up certificates
	I1124 14:02:36.780107  213570 provision.go:84] configureAuth start
	I1124 14:02:36.780202  213570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-593634
	I1124 14:02:36.805555  213570 provision.go:143] copyHostCerts
	I1124 14:02:36.805621  213570 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem, removing ...
	I1124 14:02:36.805629  213570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem
	I1124 14:02:36.805706  213570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem (1082 bytes)
	I1124 14:02:36.805804  213570 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem, removing ...
	I1124 14:02:36.805809  213570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem
	I1124 14:02:36.805834  213570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem (1123 bytes)
	I1124 14:02:36.805881  213570 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem, removing ...
	I1124 14:02:36.805885  213570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem
	I1124 14:02:36.805907  213570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem (1679 bytes)
	I1124 14:02:36.805955  213570 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem org=jenkins.embed-certs-593634 san=[127.0.0.1 192.168.76.2 embed-certs-593634 localhost minikube]
	I1124 14:02:37.074442  213570 provision.go:177] copyRemoteCerts
	I1124 14:02:37.074519  213570 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:02:37.074565  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.105113  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.228963  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 14:02:37.249359  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 14:02:37.269580  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 14:02:37.289369  213570 provision.go:87] duration metric: took 509.223197ms to configureAuth
	I1124 14:02:37.289401  213570 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:02:37.289587  213570 config.go:182] Loaded profile config "embed-certs-593634": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:02:37.289602  213570 machine.go:97] duration metric: took 4.11660352s to provisionDockerMachine
	I1124 14:02:37.289609  213570 client.go:176] duration metric: took 11.57476669s to LocalClient.Create
	I1124 14:02:37.289629  213570 start.go:167] duration metric: took 11.574903397s to libmachine.API.Create "embed-certs-593634"
	I1124 14:02:37.289636  213570 start.go:293] postStartSetup for "embed-certs-593634" (driver="docker")
	I1124 14:02:37.289644  213570 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:02:37.289700  213570 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:02:37.289746  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.313497  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.421261  213570 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:02:37.425376  213570 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:02:37.425402  213570 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:02:37.425413  213570 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/addons for local assets ...
	I1124 14:02:37.425467  213570 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/files for local assets ...
	I1124 14:02:37.425546  213570 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem -> 41782.pem in /etc/ssl/certs
	I1124 14:02:37.425648  213570 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:02:37.434170  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:02:37.454297  213570 start.go:296] duration metric: took 164.646825ms for postStartSetup
	I1124 14:02:37.454768  213570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-593634
	I1124 14:02:37.473090  213570 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/config.json ...
	I1124 14:02:37.473375  213570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:02:37.473419  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.492467  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.597996  213570 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:02:37.603374  213570 start.go:128] duration metric: took 11.892207017s to createHost
	I1124 14:02:37.603402  213570 start.go:83] releasing machines lock for "embed-certs-593634", held for 11.892340336s
	I1124 14:02:37.603491  213570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-593634
	I1124 14:02:37.622681  213570 ssh_runner.go:195] Run: cat /version.json
	I1124 14:02:37.622739  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.622988  213570 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:02:37.623049  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.653121  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.661266  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.867529  213570 ssh_runner.go:195] Run: systemctl --version
	I1124 14:02:37.880289  213570 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:02:37.885513  213570 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:02:37.885586  213570 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:02:37.919967  213570 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 14:02:37.920041  213570 start.go:496] detecting cgroup driver to use...
	I1124 14:02:37.920090  213570 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:02:37.920196  213570 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 14:02:37.939855  213570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 14:02:37.954765  213570 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:02:37.954832  213570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:02:37.973211  213570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:02:37.993531  213570 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:02:38.152217  213570 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:02:38.315244  213570 docker.go:234] disabling docker service ...
	I1124 14:02:38.315315  213570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:02:38.342606  213570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:02:38.357435  213570 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:02:38.501143  213570 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:02:38.653968  213570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:02:38.670062  213570 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:02:38.691612  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 14:02:38.701736  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 14:02:38.711955  213570 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 14:02:38.712108  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 14:02:38.722429  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:02:38.732416  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 14:02:38.742370  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:02:38.752386  213570 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:02:38.761548  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 14:02:38.771322  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 14:02:38.781079  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 14:02:38.790804  213570 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:02:38.799605  213570 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:02:38.808384  213570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:38.957014  213570 ssh_runner.go:195] Run: sudo systemctl restart containerd
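The run of sed one-liners above rewrites /etc/containerd/config.toml so the runtime matches the "cgroupfs" driver detected on the host, after which systemd is reloaded and containerd is restarted. Purely as an illustration, here is the SystemdCgroup substitution from that sequence expressed as a Go regexp; the sample TOML fragment and the function name are hypothetical.

package main

import (
	"fmt"
	"regexp"
)

// setSystemdCgroup performs the same line rewrite as the
// sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' call in the log.
func setSystemdCgroup(config string, enabled bool) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, fmt.Sprintf("${1}SystemdCgroup = %t", enabled))
}

func main() {
	sample := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// cgroupfs on the host means SystemdCgroup = false, as configured above.
	fmt.Println(setSystemdCgroup(sample, false))
}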
	I1124 14:02:39.134468  213570 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 14:02:39.134589  213570 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 14:02:39.138612  213570 start.go:564] Will wait 60s for crictl version
	I1124 14:02:39.138728  213570 ssh_runner.go:195] Run: which crictl
	I1124 14:02:39.142835  213570 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:02:39.183049  213570 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 14:02:39.183127  213570 ssh_runner.go:195] Run: containerd --version
	I1124 14:02:39.209644  213570 ssh_runner.go:195] Run: containerd --version
	I1124 14:02:39.242563  213570 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 14:02:39.245632  213570 cli_runner.go:164] Run: docker network inspect embed-certs-593634 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:02:39.261116  213570 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 14:02:39.265349  213570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:02:39.275060  213570 kubeadm.go:884] updating cluster {Name:embed-certs-593634 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:02:39.275179  213570 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:02:39.275240  213570 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:02:39.309584  213570 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:02:39.309604  213570 containerd.go:534] Images already preloaded, skipping extraction
	I1124 14:02:39.309666  213570 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:02:39.338298  213570 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:02:39.338369  213570 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:02:39.338391  213570 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1124 14:02:39.338540  213570 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-593634 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:02:39.338638  213570 ssh_runner.go:195] Run: sudo crictl info
	I1124 14:02:39.374509  213570 cni.go:84] Creating CNI manager for ""
	I1124 14:02:39.374529  213570 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:02:39.374546  213570 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:02:39.374567  213570 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-593634 NodeName:embed-certs-593634 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:02:39.374695  213570 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-593634"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:02:39.374758  213570 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:02:39.383722  213570 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:02:39.383790  213570 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:02:39.392664  213570 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1124 14:02:39.407366  213570 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:02:39.421539  213570 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1124 14:02:39.435750  213570 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:02:39.439949  213570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:02:39.450067  213570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:39.594389  213570 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:02:39.612637  213570 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634 for IP: 192.168.76.2
	I1124 14:02:39.612654  213570 certs.go:195] generating shared ca certs ...
	I1124 14:02:39.612670  213570 certs.go:227] acquiring lock for ca certs: {Name:mkcd8707c782acde0e57168c044a3df942dc4ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.612812  213570 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key
	I1124 14:02:39.612861  213570 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key
	I1124 14:02:39.612868  213570 certs.go:257] generating profile certs ...
	I1124 14:02:39.612921  213570 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.key
	I1124 14:02:39.612933  213570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.crt with IP's: []
	I1124 14:02:39.743608  213570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.crt ...
	I1124 14:02:39.743688  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.crt: {Name:mkdc127047d7bba99c4ff0de010fa76eaa96351a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.743978  213570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.key ...
	I1124 14:02:39.744016  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.key: {Name:mk5b65ad154f9ff1864bd2678d53c0d49d42b626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.744181  213570 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key.20c14e55
	I1124 14:02:39.744223  213570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt.20c14e55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 14:02:39.792416  213570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt.20c14e55 ...
	I1124 14:02:39.792488  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt.20c14e55: {Name:mk898939d3f887dee7ec2cb55d4f9f3c1473f371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.792715  213570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key.20c14e55 ...
	I1124 14:02:39.792751  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key.20c14e55: {Name:mk7634950b7d8fc2f57ae8ad6d2b71e2a24db521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.792893  213570 certs.go:382] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt.20c14e55 -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt
	I1124 14:02:39.793035  213570 certs.go:386] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key.20c14e55 -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key
	I1124 14:02:39.793197  213570 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.key
	I1124 14:02:39.793218  213570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.crt with IP's: []
	I1124 14:02:40.512550  213570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.crt ...
	I1124 14:02:40.512590  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.crt: {Name:mk7e59e3c705bb60e30918ea8dec355fb87a4cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:40.512783  213570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.key ...
	I1124 14:02:40.512800  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.key: {Name:mk1c28b0bf985e63e205a9d607bdda54b666c8d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:40.512994  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem (1338 bytes)
	W1124 14:02:40.513046  213570 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178_empty.pem, impossibly tiny 0 bytes
	I1124 14:02:40.513055  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:02:40.513084  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem (1082 bytes)
	I1124 14:02:40.513116  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:02:40.513155  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem (1679 bytes)
	I1124 14:02:40.513205  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:02:40.513807  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:02:40.534476  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 14:02:40.554772  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:02:40.573041  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:02:40.592563  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 14:02:40.610272  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:02:40.648106  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:02:40.675421  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:02:40.712861  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:02:40.741274  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem --> /usr/share/ca-certificates/4178.pem (1338 bytes)
	I1124 14:02:40.775540  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /usr/share/ca-certificates/41782.pem (1708 bytes)
	I1124 14:02:40.810151  213570 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:02:40.834734  213570 ssh_runner.go:195] Run: openssl version
	I1124 14:02:40.841134  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4178.pem && ln -fs /usr/share/ca-certificates/4178.pem /etc/ssl/certs/4178.pem"
	I1124 14:02:40.853029  213570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4178.pem
	I1124 14:02:40.860558  213570 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4178.pem
	I1124 14:02:40.860626  213570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4178.pem
	I1124 14:02:40.918401  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4178.pem /etc/ssl/certs/51391683.0"
	I1124 14:02:40.928700  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41782.pem && ln -fs /usr/share/ca-certificates/41782.pem /etc/ssl/certs/41782.pem"
	I1124 14:02:40.943881  213570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41782.pem
	I1124 14:02:40.948767  213570 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/41782.pem
	I1124 14:02:40.948833  213570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41782.pem
	I1124 14:02:41.014703  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:02:41.026160  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:02:41.039512  213570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:41.046666  213570 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:41.046734  213570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:41.111180  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
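	[Editor's note, not part of the captured log: the openssl/ln pairs above follow the standard OpenSSL CA hash-link convention, where each CA certificate under /etc/ssl/certs is reachable through a symlink named after its subject hash (51391683.0, 3ec20f2e.0, b5213941.0 here). A minimal sketch of the same pattern for one certificate:
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	]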
	I1124 14:02:41.121762  213570 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:02:41.128022  213570 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:02:41.128075  213570 kubeadm.go:401] StartCluster: {Name:embed-certs-593634 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:02:41.128164  213570 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 14:02:41.128228  213570 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:02:41.181954  213570 cri.go:89] found id: ""
	I1124 14:02:41.182043  213570 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:02:41.192535  213570 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:02:41.201483  213570 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:02:41.201548  213570 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:02:41.210919  213570 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:02:41.210940  213570 kubeadm.go:158] found existing configuration files:
	
	I1124 14:02:41.210999  213570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 14:02:41.223268  213570 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:02:41.223332  213570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:02:41.239377  213570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 14:02:41.251095  213570 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:02:41.251165  213570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:02:41.259252  213570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 14:02:41.268559  213570 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:02:41.268620  213570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:02:41.282438  213570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 14:02:41.293894  213570 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:02:41.293975  213570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 14:02:41.321578  213570 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:02:41.440101  213570 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:02:41.445250  213570 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:02:41.492866  213570 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:02:41.499280  213570 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 14:02:41.499334  213570 kubeadm.go:319] OS: Linux
	I1124 14:02:41.499382  213570 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:02:41.499444  213570 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 14:02:41.499504  213570 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:02:41.499557  213570 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:02:41.499612  213570 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:02:41.499666  213570 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:02:41.499716  213570 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:02:41.499769  213570 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:02:41.499820  213570 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 14:02:41.625341  213570 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:02:41.625456  213570 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:02:41.625558  213570 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:02:41.636268  213570 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 14:02:41.641768  213570 out.go:252]   - Generating certificates and keys ...
	I1124 14:02:41.641865  213570 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:02:41.641939  213570 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:02:42.619223  213570 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:02:43.011953  213570 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 14:02:43.483393  213570 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:02:43.810126  213570 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:02:44.825951  213570 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:02:44.828294  213570 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-593634 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 14:02:45.647118  213570 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:02:45.647643  213570 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-593634 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 14:02:45.905141  213570 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:02:46.000202  213570 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:02:46.120215  213570 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:02:46.120734  213570 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:02:46.900838  213570 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:02:47.805102  213570 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 14:02:48.517833  213570 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:02:49.348256  213570 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:02:49.516941  213570 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:02:49.518037  213570 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:02:49.520983  213570 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:02:49.523689  213570 out.go:252]   - Booting up control plane ...
	I1124 14:02:49.523845  213570 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:02:49.523973  213570 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:02:49.525837  213570 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:02:49.554261  213570 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:02:49.554370  213570 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:02:49.565946  213570 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:02:49.567436  213570 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:02:49.571311  213570 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:02:49.806053  213570 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:02:49.806172  213570 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 14:02:52.457159  212383 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:02:52.457215  212383 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:02:52.457303  212383 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:02:52.457359  212383 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 14:02:52.457393  212383 kubeadm.go:319] OS: Linux
	I1124 14:02:52.457438  212383 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:02:52.457486  212383 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 14:02:52.457532  212383 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:02:52.457580  212383 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:02:52.457628  212383 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:02:52.457682  212383 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:02:52.457728  212383 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:02:52.457775  212383 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:02:52.457821  212383 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 14:02:52.457893  212383 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:02:52.457987  212383 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:02:52.458077  212383 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:02:52.458138  212383 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 14:02:52.461386  212383 out.go:252]   - Generating certificates and keys ...
	I1124 14:02:52.461491  212383 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:02:52.461556  212383 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:02:52.461623  212383 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:02:52.461680  212383 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 14:02:52.461741  212383 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:02:52.461791  212383 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:02:52.461845  212383 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:02:52.461977  212383 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-609438 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:02:52.462028  212383 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:02:52.462157  212383 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-609438 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:02:52.462223  212383 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:02:52.462287  212383 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:02:52.462339  212383 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:02:52.462402  212383 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:02:52.462458  212383 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:02:52.462521  212383 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 14:02:52.462611  212383 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:02:52.462674  212383 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:02:52.462729  212383 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:02:52.462820  212383 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:02:52.462893  212383 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:02:52.465845  212383 out.go:252]   - Booting up control plane ...
	I1124 14:02:52.466035  212383 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:02:52.466163  212383 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:02:52.466242  212383 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:02:52.466364  212383 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:02:52.466465  212383 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:02:52.466577  212383 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:02:52.466668  212383 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:02:52.466709  212383 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:02:52.466848  212383 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:02:52.466960  212383 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 14:02:52.467024  212383 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.018392479s
	I1124 14:02:52.467123  212383 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:02:52.467209  212383 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1124 14:02:52.467305  212383 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:02:52.467389  212383 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 14:02:52.467470  212383 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.741501846s
	I1124 14:02:52.467552  212383 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.503243598s
	I1124 14:02:52.467627  212383 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.824874472s
	I1124 14:02:52.467741  212383 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:02:52.467875  212383 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:02:52.467955  212383 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:02:52.468176  212383 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-609438 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:02:52.468237  212383 kubeadm.go:319] [bootstrap-token] Using token: vzq4ay.serxkml6gk1378wv
	I1124 14:02:52.471358  212383 out.go:252]   - Configuring RBAC rules ...
	I1124 14:02:52.471499  212383 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:02:52.471591  212383 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:02:52.471743  212383 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:02:52.471880  212383 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:02:52.472017  212383 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:02:52.472112  212383 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:02:52.472236  212383 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:02:52.472282  212383 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:02:52.472331  212383 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:02:52.472335  212383 kubeadm.go:319] 
	I1124 14:02:52.472400  212383 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:02:52.472411  212383 kubeadm.go:319] 
	I1124 14:02:52.472495  212383 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:02:52.472499  212383 kubeadm.go:319] 
	I1124 14:02:52.472526  212383 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:02:52.472589  212383 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:02:52.472643  212383 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:02:52.472647  212383 kubeadm.go:319] 
	I1124 14:02:52.472705  212383 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:02:52.472709  212383 kubeadm.go:319] 
	I1124 14:02:52.472759  212383 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:02:52.472763  212383 kubeadm.go:319] 
	I1124 14:02:52.472819  212383 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:02:52.472899  212383 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:02:52.472973  212383 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:02:52.472976  212383 kubeadm.go:319] 
	I1124 14:02:52.473067  212383 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:02:52.473150  212383 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:02:52.473154  212383 kubeadm.go:319] 
	I1124 14:02:52.473251  212383 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token vzq4ay.serxkml6gk1378wv \
	I1124 14:02:52.473364  212383 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac \
	I1124 14:02:52.473385  212383 kubeadm.go:319] 	--control-plane 
	I1124 14:02:52.473389  212383 kubeadm.go:319] 
	I1124 14:02:52.473481  212383 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:02:52.473484  212383 kubeadm.go:319] 
	I1124 14:02:52.473573  212383 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token vzq4ay.serxkml6gk1378wv \
	I1124 14:02:52.473696  212383 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac 
	I1124 14:02:52.473705  212383 cni.go:84] Creating CNI manager for ""
	I1124 14:02:52.473711  212383 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:02:52.476852  212383 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:02:52.479922  212383 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:02:52.489605  212383 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 14:02:52.489623  212383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:02:52.536790  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 14:02:53.413438  212383 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:02:53.413571  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:53.413654  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-609438 minikube.k8s.io/updated_at=2025_11_24T14_02_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=default-k8s-diff-port-609438 minikube.k8s.io/primary=true
	I1124 14:02:53.507283  212383 ops.go:34] apiserver oom_adj: -16
	I1124 14:02:53.863033  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:50.808351  213570 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002003298s
	I1124 14:02:50.815187  213570 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:02:50.815743  213570 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 14:02:50.816608  213570 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:02:50.818559  213570 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 14:02:54.363074  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:54.863777  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:55.363086  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:55.863114  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:56.363110  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:56.863441  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:57.057097  212383 kubeadm.go:1114] duration metric: took 3.643574546s to wait for elevateKubeSystemPrivileges
	I1124 14:02:57.057124  212383 kubeadm.go:403] duration metric: took 26.886093324s to StartCluster
	I1124 14:02:57.057141  212383 settings.go:142] acquiring lock: {Name:mk2b0bbff4d8ced468f457362668d43b813dc062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:57.057204  212383 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:02:57.057903  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/kubeconfig: {Name:mk246d21eaffbd8aca2abdc1b2f89d6fcc902f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:57.058100  212383 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 14:02:57.058223  212383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:02:57.058472  212383 config.go:182] Loaded profile config "default-k8s-diff-port-609438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:02:57.058507  212383 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:02:57.058563  212383 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-609438"
	I1124 14:02:57.058577  212383 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-609438"
	I1124 14:02:57.058598  212383 host.go:66] Checking if "default-k8s-diff-port-609438" exists ...
	I1124 14:02:57.059105  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:57.059672  212383 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-609438"
	I1124 14:02:57.059698  212383 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-609438"
	I1124 14:02:57.060034  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:57.062295  212383 out.go:179] * Verifying Kubernetes components...
	I1124 14:02:57.067608  212383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:57.096470  212383 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:02:57.100431  212383 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:02:57.100453  212383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:02:57.100520  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:57.108007  212383 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-609438"
	I1124 14:02:57.108047  212383 host.go:66] Checking if "default-k8s-diff-port-609438" exists ...
	I1124 14:02:57.108469  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:57.150290  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:57.151191  212383 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:02:57.151207  212383 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:02:57.151270  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:57.180229  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:57.835181  212383 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:02:57.835375  212383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 14:02:57.843296  212383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:02:58.048720  212383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:02:55.577519  213570 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.75919955s
	I1124 14:02:57.488695  213570 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.669631688s
	I1124 14:02:59.319576  213570 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.503330978s
	I1124 14:02:59.347736  213570 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:02:59.365960  213570 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:02:59.389045  213570 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:02:59.389257  213570 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-593634 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:02:59.404075  213570 kubeadm.go:319] [bootstrap-token] Using token: sdluey.txxijid8fmo5jyau
	I1124 14:02:59.018640  212383 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.183422592s)
	I1124 14:02:59.019392  212383 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-609438" to be "Ready" ...
	I1124 14:02:59.019719  212383 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.176349884s)
	I1124 14:02:59.020165  212383 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.184766141s)
	I1124 14:02:59.020204  212383 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 14:02:59.505284  212383 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.456466205s)
	I1124 14:02:59.508376  212383 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1124 14:02:59.407186  213570 out.go:252]   - Configuring RBAC rules ...
	I1124 14:02:59.407326  213570 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:02:59.413876  213570 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:02:59.424114  213570 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:02:59.429247  213570 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:02:59.435888  213570 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:02:59.441214  213570 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:02:59.729166  213570 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:03:00.281783  213570 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:03:00.726578  213570 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:03:00.731583  213570 kubeadm.go:319] 
	I1124 14:03:00.731683  213570 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:03:00.731705  213570 kubeadm.go:319] 
	I1124 14:03:00.731783  213570 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:03:00.731791  213570 kubeadm.go:319] 
	I1124 14:03:00.731817  213570 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:03:00.731879  213570 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:03:00.731955  213570 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:03:00.731964  213570 kubeadm.go:319] 
	I1124 14:03:00.732019  213570 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:03:00.732029  213570 kubeadm.go:319] 
	I1124 14:03:00.732077  213570 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:03:00.732085  213570 kubeadm.go:319] 
	I1124 14:03:00.732143  213570 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:03:00.732222  213570 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:03:00.732296  213570 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:03:00.732305  213570 kubeadm.go:319] 
	I1124 14:03:00.732391  213570 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:03:00.732470  213570 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:03:00.732477  213570 kubeadm.go:319] 
	I1124 14:03:00.732562  213570 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token sdluey.txxijid8fmo5jyau \
	I1124 14:03:00.732674  213570 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac \
	I1124 14:03:00.732700  213570 kubeadm.go:319] 	--control-plane 
	I1124 14:03:00.732708  213570 kubeadm.go:319] 
	I1124 14:03:00.732793  213570 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:03:00.732801  213570 kubeadm.go:319] 
	I1124 14:03:00.732883  213570 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token sdluey.txxijid8fmo5jyau \
	I1124 14:03:00.732989  213570 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac 
	I1124 14:03:00.734466  213570 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 14:03:00.734704  213570 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:03:00.734818  213570 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 14:03:00.734840  213570 cni.go:84] Creating CNI manager for ""
	I1124 14:03:00.734847  213570 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:03:00.738356  213570 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:02:59.511261  212383 addons.go:530] duration metric: took 2.452743621s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 14:02:59.527883  212383 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-609438" context rescaled to 1 replicas
	W1124 14:03:01.022799  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:03.522484  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	I1124 14:03:00.741285  213570 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:03:00.747200  213570 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 14:03:00.747222  213570 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:03:00.762942  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 14:03:01.083756  213570 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:03:01.083943  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:01.084029  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-593634 minikube.k8s.io/updated_at=2025_11_24T14_03_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=embed-certs-593634 minikube.k8s.io/primary=true
	I1124 14:03:01.235259  213570 ops.go:34] apiserver oom_adj: -16
	I1124 14:03:01.235388  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:01.736213  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:02.235575  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:02.735531  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:03.235547  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:03.735985  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:04.235605  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:04.735509  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:05.235491  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:05.735597  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:05.862499  213570 kubeadm.go:1114] duration metric: took 4.778639859s to wait for elevateKubeSystemPrivileges
	I1124 14:03:05.862539  213570 kubeadm.go:403] duration metric: took 24.734468729s to StartCluster
	I1124 14:03:05.862559  213570 settings.go:142] acquiring lock: {Name:mk2b0bbff4d8ced468f457362668d43b813dc062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:03:05.862641  213570 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:03:05.864034  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/kubeconfig: {Name:mk246d21eaffbd8aca2abdc1b2f89d6fcc902f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:03:05.864291  213570 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:03:05.864292  213570 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 14:03:05.864627  213570 config.go:182] Loaded profile config "embed-certs-593634": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:03:05.864675  213570 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:03:05.864760  213570 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-593634"
	I1124 14:03:05.864775  213570 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-593634"
	I1124 14:03:05.864814  213570 host.go:66] Checking if "embed-certs-593634" exists ...
	I1124 14:03:05.865448  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:03:05.865928  213570 addons.go:70] Setting default-storageclass=true in profile "embed-certs-593634"
	I1124 14:03:05.865962  213570 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-593634"
	I1124 14:03:05.866329  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:03:05.867882  213570 out.go:179] * Verifying Kubernetes components...
	I1124 14:03:05.871678  213570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:03:05.918376  213570 addons.go:239] Setting addon default-storageclass=true in "embed-certs-593634"
	I1124 14:03:05.918427  213570 host.go:66] Checking if "embed-certs-593634" exists ...
	I1124 14:03:05.919006  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:03:05.928779  213570 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:03:05.931678  213570 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:03:05.931712  213570 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:03:05.931788  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:03:05.962335  213570 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:03:05.962376  213570 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:03:05.962476  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:03:05.993403  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:03:06.003508  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:03:06.391385  213570 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 14:03:06.391488  213570 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:03:06.435021  213570 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:03:06.439159  213570 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:03:06.771396  213570 node_ready.go:35] waiting up to 6m0s for node "embed-certs-593634" to be "Ready" ...
	I1124 14:03:06.771837  213570 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 14:03:07.089005  213570 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1124 14:03:06.022254  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:08.023381  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	I1124 14:03:07.091942  213570 addons.go:530] duration metric: took 1.22725676s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 14:03:07.275615  213570 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-593634" context rescaled to 1 replicas
	W1124 14:03:08.774304  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:10.522868  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:12.525848  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:10.776272  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:13.274310  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:15.274775  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:14.526016  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:17.023060  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:17.774691  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:20.274332  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:19.523467  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:21.524121  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:23.524697  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:22.774276  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:24.775051  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:26.022538  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:28.023018  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:27.274791  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:29.275073  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:30.030420  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:32.524753  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:31.774872  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:34.274493  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:35.023155  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:37.025173  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:36.275275  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:38.774804  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	I1124 14:03:39.023101  212383 node_ready.go:49] node "default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:39.023134  212383 node_ready.go:38] duration metric: took 40.003724122s for node "default-k8s-diff-port-609438" to be "Ready" ...
	I1124 14:03:39.023149  212383 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:03:39.023211  212383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:03:39.035892  212383 api_server.go:72] duration metric: took 41.977763431s to wait for apiserver process to appear ...
	I1124 14:03:39.035957  212383 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:03:39.035992  212383 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 14:03:39.045601  212383 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 14:03:39.046766  212383 api_server.go:141] control plane version: v1.34.1
	I1124 14:03:39.046790  212383 api_server.go:131] duration metric: took 10.8162ms to wait for apiserver health ...
	I1124 14:03:39.046799  212383 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:03:39.057366  212383 system_pods.go:59] 8 kube-system pods found
	I1124 14:03:39.057464  212383 system_pods.go:61] "coredns-66bc5c9577-qctbs" [cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:39.057486  212383 system_pods.go:61] "etcd-default-k8s-diff-port-609438" [3e2d5715-12d7-441e-9747-edb4c6f78577] Running
	I1124 14:03:39.057527  212383 system_pods.go:61] "kindnet-jcqb9" [92836c58-7b28-4b1b-838d-9491cd23823b] Running
	I1124 14:03:39.057552  212383 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-609438" [b6e69d70-9c7f-4b06-8ba8-a37c17d79bb9] Running
	I1124 14:03:39.057573  212383 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-609438" [a1dba2ac-ba3c-4282-966e-c7abffbb6b9a] Running
	I1124 14:03:39.057612  212383 system_pods.go:61] "kube-proxy-frlpg" [814cc9f1-7449-431c-a35d-3ac3b4d05db9] Running
	I1124 14:03:39.057637  212383 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-609438" [a87b6471-6253-4b9c-abd1-83d029df6343] Running
	I1124 14:03:39.057664  212383 system_pods.go:61] "storage-provisioner" [98d7eb97-3a94-4904-9af3-f063689cec40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:39.057702  212383 system_pods.go:74] duration metric: took 10.895381ms to wait for pod list to return data ...
	I1124 14:03:39.057729  212383 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:03:39.068310  212383 default_sa.go:45] found service account: "default"
	I1124 14:03:39.068335  212383 default_sa.go:55] duration metric: took 10.585051ms for default service account to be created ...
	I1124 14:03:39.068346  212383 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:03:39.072487  212383 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:39.072578  212383 system_pods.go:89] "coredns-66bc5c9577-qctbs" [cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:39.072601  212383 system_pods.go:89] "etcd-default-k8s-diff-port-609438" [3e2d5715-12d7-441e-9747-edb4c6f78577] Running
	I1124 14:03:39.072648  212383 system_pods.go:89] "kindnet-jcqb9" [92836c58-7b28-4b1b-838d-9491cd23823b] Running
	I1124 14:03:39.072673  212383 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-609438" [b6e69d70-9c7f-4b06-8ba8-a37c17d79bb9] Running
	I1124 14:03:39.072696  212383 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-609438" [a1dba2ac-ba3c-4282-966e-c7abffbb6b9a] Running
	I1124 14:03:39.072735  212383 system_pods.go:89] "kube-proxy-frlpg" [814cc9f1-7449-431c-a35d-3ac3b4d05db9] Running
	I1124 14:03:39.072761  212383 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-609438" [a87b6471-6253-4b9c-abd1-83d029df6343] Running
	I1124 14:03:39.072785  212383 system_pods.go:89] "storage-provisioner" [98d7eb97-3a94-4904-9af3-f063689cec40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:39.072847  212383 retry.go:31] will retry after 264.799989ms: missing components: kube-dns
	I1124 14:03:39.342534  212383 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:39.342686  212383 system_pods.go:89] "coredns-66bc5c9577-qctbs" [cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:39.342725  212383 system_pods.go:89] "etcd-default-k8s-diff-port-609438" [3e2d5715-12d7-441e-9747-edb4c6f78577] Running
	I1124 14:03:39.342754  212383 system_pods.go:89] "kindnet-jcqb9" [92836c58-7b28-4b1b-838d-9491cd23823b] Running
	I1124 14:03:39.342775  212383 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-609438" [b6e69d70-9c7f-4b06-8ba8-a37c17d79bb9] Running
	I1124 14:03:39.342816  212383 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-609438" [a1dba2ac-ba3c-4282-966e-c7abffbb6b9a] Running
	I1124 14:03:39.342842  212383 system_pods.go:89] "kube-proxy-frlpg" [814cc9f1-7449-431c-a35d-3ac3b4d05db9] Running
	I1124 14:03:39.342864  212383 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-609438" [a87b6471-6253-4b9c-abd1-83d029df6343] Running
	I1124 14:03:39.342912  212383 system_pods.go:89] "storage-provisioner" [98d7eb97-3a94-4904-9af3-f063689cec40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:39.342941  212383 retry.go:31] will retry after 272.670872ms: missing components: kube-dns
	I1124 14:03:39.626215  212383 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:39.626242  212383 system_pods.go:89] "coredns-66bc5c9577-qctbs" [cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de] Running
	I1124 14:03:39.626248  212383 system_pods.go:89] "etcd-default-k8s-diff-port-609438" [3e2d5715-12d7-441e-9747-edb4c6f78577] Running
	I1124 14:03:39.626254  212383 system_pods.go:89] "kindnet-jcqb9" [92836c58-7b28-4b1b-838d-9491cd23823b] Running
	I1124 14:03:39.626258  212383 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-609438" [b6e69d70-9c7f-4b06-8ba8-a37c17d79bb9] Running
	I1124 14:03:39.626271  212383 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-609438" [a1dba2ac-ba3c-4282-966e-c7abffbb6b9a] Running
	I1124 14:03:39.626274  212383 system_pods.go:89] "kube-proxy-frlpg" [814cc9f1-7449-431c-a35d-3ac3b4d05db9] Running
	I1124 14:03:39.626278  212383 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-609438" [a87b6471-6253-4b9c-abd1-83d029df6343] Running
	I1124 14:03:39.626282  212383 system_pods.go:89] "storage-provisioner" [98d7eb97-3a94-4904-9af3-f063689cec40] Running
	I1124 14:03:39.626289  212383 system_pods.go:126] duration metric: took 557.937565ms to wait for k8s-apps to be running ...
	I1124 14:03:39.626297  212383 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:03:39.626351  212383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:03:39.649756  212383 system_svc.go:56] duration metric: took 23.432209ms WaitForService to wait for kubelet
	I1124 14:03:39.649833  212383 kubeadm.go:587] duration metric: took 42.591709093s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:03:39.649867  212383 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:03:39.658388  212383 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:03:39.658418  212383 node_conditions.go:123] node cpu capacity is 2
	I1124 14:03:39.658433  212383 node_conditions.go:105] duration metric: took 8.545281ms to run NodePressure ...
	I1124 14:03:39.658445  212383 start.go:242] waiting for startup goroutines ...
	I1124 14:03:39.658453  212383 start.go:247] waiting for cluster config update ...
	I1124 14:03:39.658464  212383 start.go:256] writing updated cluster config ...
	I1124 14:03:39.658759  212383 ssh_runner.go:195] Run: rm -f paused
	I1124 14:03:39.662925  212383 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:03:39.668038  212383 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qctbs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.673734  212383 pod_ready.go:94] pod "coredns-66bc5c9577-qctbs" is "Ready"
	I1124 14:03:39.673815  212383 pod_ready.go:86] duration metric: took 5.694049ms for pod "coredns-66bc5c9577-qctbs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.676472  212383 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.685362  212383 pod_ready.go:94] pod "etcd-default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:39.685439  212383 pod_ready.go:86] duration metric: took 8.894816ms for pod "etcd-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.688312  212383 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.695577  212383 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:39.695663  212383 pod_ready.go:86] duration metric: took 7.234136ms for pod "kube-apiserver-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.698560  212383 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:40.070303  212383 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:40.070379  212383 pod_ready.go:86] duration metric: took 371.738474ms for pod "kube-controller-manager-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:40.267521  212383 pod_ready.go:83] waiting for pod "kube-proxy-frlpg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:40.667723  212383 pod_ready.go:94] pod "kube-proxy-frlpg" is "Ready"
	I1124 14:03:40.667753  212383 pod_ready.go:86] duration metric: took 400.161589ms for pod "kube-proxy-frlpg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:40.868901  212383 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:41.268703  212383 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:41.268732  212383 pod_ready.go:86] duration metric: took 399.797357ms for pod "kube-scheduler-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:41.268746  212383 pod_ready.go:40] duration metric: took 1.605732693s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:03:41.331086  212383 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:03:41.336425  212383 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-609438" cluster and "default" namespace by default
	W1124 14:03:41.279143  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:43.774833  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:45.775431  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:48.275442  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	I1124 14:03:48.774783  213570 node_ready.go:49] node "embed-certs-593634" is "Ready"
	I1124 14:03:48.774815  213570 node_ready.go:38] duration metric: took 42.00333297s for node "embed-certs-593634" to be "Ready" ...
	I1124 14:03:48.774830  213570 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:03:48.774888  213570 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:03:48.787878  213570 api_server.go:72] duration metric: took 42.923556551s to wait for apiserver process to appear ...
	I1124 14:03:48.787947  213570 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:03:48.787968  213570 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 14:03:48.796278  213570 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 14:03:48.797266  213570 api_server.go:141] control plane version: v1.34.1
	I1124 14:03:48.797292  213570 api_server.go:131] duration metric: took 9.336207ms to wait for apiserver health ...
	I1124 14:03:48.797301  213570 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:03:48.800410  213570 system_pods.go:59] 8 kube-system pods found
	I1124 14:03:48.800444  213570 system_pods.go:61] "coredns-66bc5c9577-jjgxr" [9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:48.800451  213570 system_pods.go:61] "etcd-embed-certs-593634" [1ad343da-778d-475d-a5ce-fc08e11f693c] Running
	I1124 14:03:48.800456  213570 system_pods.go:61] "kindnet-2xhmk" [a716bd95-8847-4c78-a39c-0234825c66fb] Running
	I1124 14:03:48.800460  213570 system_pods.go:61] "kube-apiserver-embed-certs-593634" [2a958fa8-640e-4d6e-80a4-4cb5abb541bf] Running
	I1124 14:03:48.800464  213570 system_pods.go:61] "kube-controller-manager-embed-certs-593634" [5897c242-4f69-4740-bc24-712bc8bdb2f6] Running
	I1124 14:03:48.800468  213570 system_pods.go:61] "kube-proxy-t2c22" [62324907-3da3-4c2c-887d-798d8375da05] Running
	I1124 14:03:48.800472  213570 system_pods.go:61] "kube-scheduler-embed-certs-593634" [b881f394-008a-4da5-87fe-94a9d922e12c] Running
	I1124 14:03:48.800477  213570 system_pods.go:61] "storage-provisioner" [45b3891f-97a3-4dcb-bafa-b1400a3b4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:48.800489  213570 system_pods.go:74] duration metric: took 3.183028ms to wait for pod list to return data ...
	I1124 14:03:48.800497  213570 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:03:48.803083  213570 default_sa.go:45] found service account: "default"
	I1124 14:03:48.803109  213570 default_sa.go:55] duration metric: took 2.606184ms for default service account to be created ...
	I1124 14:03:48.803119  213570 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:03:48.806286  213570 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:48.806321  213570 system_pods.go:89] "coredns-66bc5c9577-jjgxr" [9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:48.806328  213570 system_pods.go:89] "etcd-embed-certs-593634" [1ad343da-778d-475d-a5ce-fc08e11f693c] Running
	I1124 14:03:48.806334  213570 system_pods.go:89] "kindnet-2xhmk" [a716bd95-8847-4c78-a39c-0234825c66fb] Running
	I1124 14:03:48.806365  213570 system_pods.go:89] "kube-apiserver-embed-certs-593634" [2a958fa8-640e-4d6e-80a4-4cb5abb541bf] Running
	I1124 14:03:48.806377  213570 system_pods.go:89] "kube-controller-manager-embed-certs-593634" [5897c242-4f69-4740-bc24-712bc8bdb2f6] Running
	I1124 14:03:48.806381  213570 system_pods.go:89] "kube-proxy-t2c22" [62324907-3da3-4c2c-887d-798d8375da05] Running
	I1124 14:03:48.806385  213570 system_pods.go:89] "kube-scheduler-embed-certs-593634" [b881f394-008a-4da5-87fe-94a9d922e12c] Running
	I1124 14:03:48.806395  213570 system_pods.go:89] "storage-provisioner" [45b3891f-97a3-4dcb-bafa-b1400a3b4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:48.806421  213570 retry.go:31] will retry after 312.175321ms: missing components: kube-dns
	I1124 14:03:49.124170  213570 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:49.124261  213570 system_pods.go:89] "coredns-66bc5c9577-jjgxr" [9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:49.124283  213570 system_pods.go:89] "etcd-embed-certs-593634" [1ad343da-778d-475d-a5ce-fc08e11f693c] Running
	I1124 14:03:49.124327  213570 system_pods.go:89] "kindnet-2xhmk" [a716bd95-8847-4c78-a39c-0234825c66fb] Running
	I1124 14:03:49.124354  213570 system_pods.go:89] "kube-apiserver-embed-certs-593634" [2a958fa8-640e-4d6e-80a4-4cb5abb541bf] Running
	I1124 14:03:49.124376  213570 system_pods.go:89] "kube-controller-manager-embed-certs-593634" [5897c242-4f69-4740-bc24-712bc8bdb2f6] Running
	I1124 14:03:49.124412  213570 system_pods.go:89] "kube-proxy-t2c22" [62324907-3da3-4c2c-887d-798d8375da05] Running
	I1124 14:03:49.124439  213570 system_pods.go:89] "kube-scheduler-embed-certs-593634" [b881f394-008a-4da5-87fe-94a9d922e12c] Running
	I1124 14:03:49.124462  213570 system_pods.go:89] "storage-provisioner" [45b3891f-97a3-4dcb-bafa-b1400a3b4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:49.124508  213570 retry.go:31] will retry after 274.806291ms: missing components: kube-dns
	I1124 14:03:49.404719  213570 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:49.404754  213570 system_pods.go:89] "coredns-66bc5c9577-jjgxr" [9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:49.404761  213570 system_pods.go:89] "etcd-embed-certs-593634" [1ad343da-778d-475d-a5ce-fc08e11f693c] Running
	I1124 14:03:49.404768  213570 system_pods.go:89] "kindnet-2xhmk" [a716bd95-8847-4c78-a39c-0234825c66fb] Running
	I1124 14:03:49.404772  213570 system_pods.go:89] "kube-apiserver-embed-certs-593634" [2a958fa8-640e-4d6e-80a4-4cb5abb541bf] Running
	I1124 14:03:49.404776  213570 system_pods.go:89] "kube-controller-manager-embed-certs-593634" [5897c242-4f69-4740-bc24-712bc8bdb2f6] Running
	I1124 14:03:49.404780  213570 system_pods.go:89] "kube-proxy-t2c22" [62324907-3da3-4c2c-887d-798d8375da05] Running
	I1124 14:03:49.404784  213570 system_pods.go:89] "kube-scheduler-embed-certs-593634" [b881f394-008a-4da5-87fe-94a9d922e12c] Running
	I1124 14:03:49.404789  213570 system_pods.go:89] "storage-provisioner" [45b3891f-97a3-4dcb-bafa-b1400a3b4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:49.404803  213570 retry.go:31] will retry after 483.554421ms: missing components: kube-dns
	I1124 14:03:49.894105  213570 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:49.894135  213570 system_pods.go:89] "coredns-66bc5c9577-jjgxr" [9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66] Running
	I1124 14:03:49.894142  213570 system_pods.go:89] "etcd-embed-certs-593634" [1ad343da-778d-475d-a5ce-fc08e11f693c] Running
	I1124 14:03:49.894146  213570 system_pods.go:89] "kindnet-2xhmk" [a716bd95-8847-4c78-a39c-0234825c66fb] Running
	I1124 14:03:49.894151  213570 system_pods.go:89] "kube-apiserver-embed-certs-593634" [2a958fa8-640e-4d6e-80a4-4cb5abb541bf] Running
	I1124 14:03:49.894156  213570 system_pods.go:89] "kube-controller-manager-embed-certs-593634" [5897c242-4f69-4740-bc24-712bc8bdb2f6] Running
	I1124 14:03:49.894161  213570 system_pods.go:89] "kube-proxy-t2c22" [62324907-3da3-4c2c-887d-798d8375da05] Running
	I1124 14:03:49.894165  213570 system_pods.go:89] "kube-scheduler-embed-certs-593634" [b881f394-008a-4da5-87fe-94a9d922e12c] Running
	I1124 14:03:49.894169  213570 system_pods.go:89] "storage-provisioner" [45b3891f-97a3-4dcb-bafa-b1400a3b4480] Running
	I1124 14:03:49.894178  213570 system_pods.go:126] duration metric: took 1.091052703s to wait for k8s-apps to be running ...
	I1124 14:03:49.894185  213570 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:03:49.894238  213570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:03:49.917451  213570 system_svc.go:56] duration metric: took 23.256451ms WaitForService to wait for kubelet
	I1124 14:03:49.917492  213570 kubeadm.go:587] duration metric: took 44.053162457s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:03:49.917516  213570 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:03:49.923758  213570 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:03:49.923792  213570 node_conditions.go:123] node cpu capacity is 2
	I1124 14:03:49.923807  213570 node_conditions.go:105] duration metric: took 6.285283ms to run NodePressure ...
	I1124 14:03:49.923820  213570 start.go:242] waiting for startup goroutines ...
	I1124 14:03:49.923828  213570 start.go:247] waiting for cluster config update ...
	I1124 14:03:49.923839  213570 start.go:256] writing updated cluster config ...
	I1124 14:03:49.924206  213570 ssh_runner.go:195] Run: rm -f paused
	I1124 14:03:49.927626  213570 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:03:49.931893  213570 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jjgxr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.942828  213570 pod_ready.go:94] pod "coredns-66bc5c9577-jjgxr" is "Ready"
	I1124 14:03:49.942856  213570 pod_ready.go:86] duration metric: took 10.828769ms for pod "coredns-66bc5c9577-jjgxr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.945912  213570 pod_ready.go:83] waiting for pod "etcd-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.951340  213570 pod_ready.go:94] pod "etcd-embed-certs-593634" is "Ready"
	I1124 14:03:49.951371  213570 pod_ready.go:86] duration metric: took 5.432769ms for pod "etcd-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.955119  213570 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.962767  213570 pod_ready.go:94] pod "kube-apiserver-embed-certs-593634" is "Ready"
	I1124 14:03:49.962795  213570 pod_ready.go:86] duration metric: took 7.64808ms for pod "kube-apiserver-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.966857  213570 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:50.332804  213570 pod_ready.go:94] pod "kube-controller-manager-embed-certs-593634" is "Ready"
	I1124 14:03:50.332831  213570 pod_ready.go:86] duration metric: took 365.944063ms for pod "kube-controller-manager-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:50.533022  213570 pod_ready.go:83] waiting for pod "kube-proxy-t2c22" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:50.932652  213570 pod_ready.go:94] pod "kube-proxy-t2c22" is "Ready"
	I1124 14:03:50.932687  213570 pod_ready.go:86] duration metric: took 399.640527ms for pod "kube-proxy-t2c22" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:51.133145  213570 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:51.532686  213570 pod_ready.go:94] pod "kube-scheduler-embed-certs-593634" is "Ready"
	I1124 14:03:51.532723  213570 pod_ready.go:86] duration metric: took 399.546574ms for pod "kube-scheduler-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:51.532738  213570 pod_ready.go:40] duration metric: took 1.605063201s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:03:51.763100  213570 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:03:51.766630  213570 out.go:179] * Done! kubectl is now configured to use "embed-certs-593634" cluster and "default" namespace by default
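
Both profiles finish with a client/server version note of the form "kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)". The skew is simply the difference between the kubectl client's minor version and the cluster's minor version; kubectl is supported within one minor version of the apiserver in either direction, which is why a skew of 1 only produces a note rather than a warning. The following is a minimal Go sketch of that comparison; the function names and the hard-coded version strings are illustrative only, not minikube's actual implementation.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions of
// two "major.minor.patch" strings, e.g. "1.33.2" vs "1.34.1" -> 1.
func minorSkew(client, server string) (int, error) {
	cm, err := minor(client)
	if err != nil {
		return 0, err
	}
	sm, err := minor(server)
	if err != nil {
		return 0, err
	}
	if cm > sm {
		return cm - sm, nil
	}
	return sm - cm, nil
}

// minor extracts the minor component, tolerating an optional leading "v".
func minor(v string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unexpected version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	skew, err := minorSkew("1.33.2", "1.34.1")
	if err != nil {
		panic(err)
	}
	fmt.Printf("minor skew: %d\n", skew) // prints "minor skew: 1", matching the log line
}
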
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	9c4f0887e02e4       1611cd07b61d5       9 seconds ago        Running             busybox                   0                   5b44381c74ffd       busybox                                                default
	00dfaea3cc3d9       ba04bb24b9575       14 seconds ago       Running             storage-provisioner       0                   abcbb29d89b8e       storage-provisioner                                    kube-system
	ed166e253240c       138784d87c9c5       14 seconds ago       Running             coredns                   0                   d4e02c124f709       coredns-66bc5c9577-qctbs                               kube-system
	0d8cc01f3acbd       05baa95f5142d       55 seconds ago       Running             kube-proxy                0                   9d1ee823a15c2       kube-proxy-frlpg                                       kube-system
	d0f3f67b1f102       b1a8c6f707935       55 seconds ago       Running             kindnet-cni               0                   f1a6d1e17d43d       kindnet-jcqb9                                          kube-system
	459ad362844ec       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   d34c569cc4626       kube-scheduler-default-k8s-diff-port-609438            kube-system
	eb13e64310f28       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   4bcf9deea4a1d       kube-controller-manager-default-k8s-diff-port-609438   kube-system
	be628a67cb3ed       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   6f6ebbd6fbb40       kube-apiserver-default-k8s-diff-port-609438            kube-system
	a79dfb2c6db31       a1894772a478e       About a minute ago   Running             etcd                      0                   2ad7a160ea4de       etcd-default-k8s-diff-port-609438                      kube-system
	
	
	==> containerd <==
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.269240975Z" level=info msg="CreateContainer within sandbox \"d4e02c124f709296589df54e8f7f93d43ee806dccbd26464d609201e03032544\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed166e253240cdbdfc56301dbd8d8567b59792fb85b0d0dbd0d72189e5a069d5\""
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.271167743Z" level=info msg="StartContainer for \"ed166e253240cdbdfc56301dbd8d8567b59792fb85b0d0dbd0d72189e5a069d5\""
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.276670117Z" level=info msg="connecting to shim ed166e253240cdbdfc56301dbd8d8567b59792fb85b0d0dbd0d72189e5a069d5" address="unix:///run/containerd/s/3852ad87deb539a683ba63f41c208f0c64160eea58fb3338df14e43fd97e9a37" protocol=ttrpc version=3
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.279697844Z" level=info msg="Container 00dfaea3cc3d9d681a16a40db39e8b36acc58147c4a4bcba29b9f0947732bc54: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.293323759Z" level=info msg="CreateContainer within sandbox \"abcbb29d89b8effef39f23c0f3f77af0f2383dff37fdf8b1ab9e42b1a8a9a5e9\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"00dfaea3cc3d9d681a16a40db39e8b36acc58147c4a4bcba29b9f0947732bc54\""
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.296512703Z" level=info msg="StartContainer for \"00dfaea3cc3d9d681a16a40db39e8b36acc58147c4a4bcba29b9f0947732bc54\""
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.299384294Z" level=info msg="connecting to shim 00dfaea3cc3d9d681a16a40db39e8b36acc58147c4a4bcba29b9f0947732bc54" address="unix:///run/containerd/s/a9c241791f911861b5cfcd3b9aec455e35e631195cc17f0ac97e7cb03001f314" protocol=ttrpc version=3
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.363080677Z" level=info msg="StartContainer for \"ed166e253240cdbdfc56301dbd8d8567b59792fb85b0d0dbd0d72189e5a069d5\" returns successfully"
	Nov 24 14:03:39 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:39.403834064Z" level=info msg="StartContainer for \"00dfaea3cc3d9d681a16a40db39e8b36acc58147c4a4bcba29b9f0947732bc54\" returns successfully"
	Nov 24 14:03:41 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:41.908603439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:ad098064-4a27-4674-9c05-03b1e253a816,Namespace:default,Attempt:0,}"
	Nov 24 14:03:41 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:41.958212830Z" level=info msg="connecting to shim 5b44381c74ffdb59c1d068c8d245c0227120a165ab453544aa62d965abc8e01b" address="unix:///run/containerd/s/305b5a19ae168cde4a06a02c0e2cd9e74d7b68984a3b039bcd720f4b331aa00b" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 14:03:42 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:42.028125441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:ad098064-4a27-4674-9c05-03b1e253a816,Namespace:default,Attempt:0,} returns sandbox id \"5b44381c74ffdb59c1d068c8d245c0227120a165ab453544aa62d965abc8e01b\""
	Nov 24 14:03:42 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:42.032618827Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.240175136Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.242061787Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937186"
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.244537943Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.247794400Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.248489579Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.21564958s"
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.248615816Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.255545070Z" level=info msg="CreateContainer within sandbox \"5b44381c74ffdb59c1d068c8d245c0227120a165ab453544aa62d965abc8e01b\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.268514231Z" level=info msg="Container 9c4f0887e02e4e2a389390604a999bdeb395e0061f85b2733f3f009c841ec536: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.279713475Z" level=info msg="CreateContainer within sandbox \"5b44381c74ffdb59c1d068c8d245c0227120a165ab453544aa62d965abc8e01b\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"9c4f0887e02e4e2a389390604a999bdeb395e0061f85b2733f3f009c841ec536\""
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.280670048Z" level=info msg="StartContainer for \"9c4f0887e02e4e2a389390604a999bdeb395e0061f85b2733f3f009c841ec536\""
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.281844421Z" level=info msg="connecting to shim 9c4f0887e02e4e2a389390604a999bdeb395e0061f85b2733f3f009c841ec536" address="unix:///run/containerd/s/305b5a19ae168cde4a06a02c0e2cd9e74d7b68984a3b039bcd720f4b331aa00b" protocol=ttrpc version=3
	Nov 24 14:03:44 default-k8s-diff-port-609438 containerd[758]: time="2025-11-24T14:03:44.340743232Z" level=info msg="StartContainer for \"9c4f0887e02e4e2a389390604a999bdeb395e0061f85b2733f3f009c841ec536\" returns successfully"
	
	
	==> coredns [ed166e253240cdbdfc56301dbd8d8567b59792fb85b0d0dbd0d72189e5a069d5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47448 - 6948 "HINFO IN 4773065237209705457.4329358106017141151. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03988351s
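
The start log above shows how this CoreDNS configuration was produced: minikube rewrites the coredns ConfigMap to insert a hosts block mapping host.minikube.internal to the profile's gateway address (192.168.76.1 in the embed-certs log; for the default-k8s-diff-port profile whose CoreDNS output appears here, the corresponding entry would presumably be 192.168.85.1) and to enable the log plugin, and the banner above reports the hash of the resulting configuration. A quick way to confirm what CoreDNS is actually serving is to print the Corefile back out of the ConfigMap. This sketch shells out to kubectl the same way the test harness does; the context name is taken from this report and is otherwise an illustrative choice.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Print the Corefile CoreDNS is configured with, to verify the injected
	// "hosts { ... host.minikube.internal ... }" block and the "log" plugin.
	out, err := exec.Command(
		"kubectl", "--context", "default-k8s-diff-port-609438",
		"-n", "kube-system", "get", "configmap", "coredns",
		"-o", "jsonpath={.data.Corefile}",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s\n", err, out)
		return
	}
	fmt.Println(string(out))
}

The same start log also notes the coredns deployment being rescaled to 1 replica, so a single coredns-66bc5c9577-* pod is the expected steady state for these single-node profiles.
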
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-609438
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-609438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=default-k8s-diff-port-609438
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_02_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:02:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-609438
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:03:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:03:38 +0000   Mon, 24 Nov 2025 14:02:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:03:38 +0000   Mon, 24 Nov 2025 14:02:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:03:38 +0000   Mon, 24 Nov 2025 14:02:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:03:38 +0000   Mon, 24 Nov 2025 14:03:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-609438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                cceb120a-9f59-48c4-a660-aa41bd8d88a2
	  Boot ID:                    dd480c26-e101-4930-b98c-54c06b430fdc
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-qctbs                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-609438                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-jcqb9                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-609438             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-609438    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-frlpg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-609438             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node default-k8s-diff-port-609438 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node default-k8s-diff-port-609438 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x7 over 73s)  kubelet          Node default-k8s-diff-port-609438 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node default-k8s-diff-port-609438 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node default-k8s-diff-port-609438 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node default-k8s-diff-port-609438 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           57s                node-controller  Node default-k8s-diff-port-609438 event: Registered Node default-k8s-diff-port-609438 in Controller
	  Normal   NodeReady                15s                kubelet          Node default-k8s-diff-port-609438 status is now: NodeReady
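
The Conditions and Capacity sections above carry the same information the start logs poll for: the node's Ready condition ("node_ready" waits, retried roughly every two seconds) and, once Ready, its capacity ("node storage ephemeral capacity is 203034800Ki", "node cpu capacity is 2"). The sketch below is a hedged, stand-alone version of that check, again shelling out to kubectl; the node name is the one from this report, and the 6-minute deadline mirrors the "waiting up to 6m0s" in the log rather than any fixed requirement.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// nodeReady reports whether the node's Ready condition has status "True",
// i.e. the same condition shown in the describe output above.
func nodeReady(node string) (bool, error) {
	out, err := exec.Command(
		"kubectl", "get", "node", node,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
	).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	const node = "default-k8s-diff-port-609438"

	deadline := time.Now().Add(6 * time.Minute) // the start log waits up to 6m0s
	for time.Now().Before(deadline) {
		ready, err := nodeReady(node)
		if err != nil {
			fmt.Println("check failed:", err)
		} else if ready {
			fmt.Println("node is Ready")
			break
		}
		time.Sleep(2 * time.Second) // roughly the retry cadence visible in the log
	}

	// The capacity map corresponds to the Capacity section of the describe output.
	out, err := exec.Command(
		"kubectl", "get", "node", node,
		"-o", "jsonpath={.status.capacity}",
	).Output()
	if err != nil {
		fmt.Println("capacity lookup failed:", err)
		return
	}
	fmt.Println("capacity:", string(out))
}
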
	
	
	==> dmesg <==
	[Nov24 12:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014697] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497291] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033884] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.804993] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.476130] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [a79dfb2c6db3185c247a8edea7f54f9694063835ada40e0d4f8bb18721962197] <==
	{"level":"warn","ts":"2025-11-24T14:02:45.852115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:45.891272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:45.932613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:45.942867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:45.970765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.000119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.026133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.061726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.089536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.155658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.168663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.182002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.212157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.258107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.268641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.314942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.343824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.367560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.408164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.434863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.458611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.496063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.518190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.539111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:46.684310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55708","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:03:53 up  1:46,  0 user,  load average: 3.04, 3.45, 3.05
	Linux default-k8s-diff-port-609438 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d0f3f67b1f102d80491052bfec95c49cc4eadbe3bff4a7d6a3ed0fd779addfd1] <==
	I1124 14:02:58.476159       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:02:58.476391       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 14:02:58.476496       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:02:58.476508       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:02:58.476521       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:02:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:02:58.686498       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:02:58.686564       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:02:58.686574       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:02:58.760440       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:03:28.686515       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:03:28.686515       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 14:03:28.687855       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:03:28.761422       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 14:03:29.687082       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:03:29.687377       1 metrics.go:72] Registering metrics
	I1124 14:03:29.687567       1 controller.go:711] "Syncing nftables rules"
	I1124 14:03:38.694573       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:03:38.694635       1 main.go:301] handling current node
	I1124 14:03:48.688498       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:03:48.688548       1 main.go:301] handling current node
	
	
	==> kube-apiserver [be628a67cb3edc8f555e0e4a52eb70c6cfbc1b59edfe16c9b0515c4976eefd13] <==
	I1124 14:02:48.317834       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:02:48.322311       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 14:02:48.362195       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:02:48.435618       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:02:48.462052       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 14:02:48.476837       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:02:48.538533       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:02:48.558711       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:02:48.946346       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 14:02:48.973136       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 14:02:48.974079       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:02:50.475026       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:02:50.569650       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:02:50.788078       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 14:02:50.799168       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 14:02:50.800958       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:02:50.821904       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:02:51.249509       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:02:51.864387       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:02:51.884230       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 14:02:51.908911       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 14:02:56.757961       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:02:56.774275       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:02:57.057851       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 14:02:57.407948       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [eb13e64310f2866a582c82705404d464b3ef8275165d8ff7ddf618f5224a962b] <==
	I1124 14:02:56.512434       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 14:02:56.512524       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:02:56.520716       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 14:02:56.528422       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:02:56.532707       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 14:02:56.545487       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:02:56.545548       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 14:02:56.545631       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 14:02:56.545705       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-609438"
	I1124 14:02:56.545741       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 14:02:56.545776       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 14:02:56.545806       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 14:02:56.545963       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:02:56.547854       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:02:56.557576       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 14:02:56.557819       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:02:56.557841       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 14:02:56.560419       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 14:02:56.585155       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:02:56.592148       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:02:56.592148       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 14:02:56.595983       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:02:56.596009       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:02:56.596017       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:03:41.552401       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0d8cc01f3acbdf10be8708ac1417428a3f6e27d5d8157f32bd1a5668a144a05e] <==
	I1124 14:02:58.748852       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:02:58.849738       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:02:58.952310       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:02:58.952351       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 14:02:58.952438       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:02:59.007296       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:02:59.007388       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:02:59.040244       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:02:59.040760       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:02:59.040789       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:02:59.046828       1 config.go:200] "Starting service config controller"
	I1124 14:02:59.046849       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:02:59.046877       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:02:59.046883       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:02:59.046909       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:02:59.046919       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:02:59.047701       1 config.go:309] "Starting node config controller"
	I1124 14:02:59.047722       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:02:59.047728       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:02:59.148034       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:02:59.148081       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:02:59.159995       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [459ad362844ecb08400a072a5a4113b697f5c8f001d2e3d39582353e18a4c77b] <==
	I1124 14:02:46.744784       1 serving.go:386] Generated self-signed cert in-memory
	I1124 14:02:51.010999       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:02:51.011038       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:02:51.016297       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 14:02:51.016551       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 14:02:51.016712       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:02:51.016805       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:02:51.016731       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:02:51.016753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 14:02:51.016767       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:02:51.017340       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:02:51.119384       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 14:02:51.119506       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:02:51.119565       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:02:53 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:53.489729    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-609438" podStartSLOduration=0.489692096 podStartE2EDuration="489.692096ms" podCreationTimestamp="2025-11-24 14:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:02:53.468145965 +0000 UTC m=+1.657769133" watchObservedRunningTime="2025-11-24 14:02:53.489692096 +0000 UTC m=+1.679315263"
	Nov 24 14:02:53 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:53.519203    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-609438" podStartSLOduration=0.519183143 podStartE2EDuration="519.183143ms" podCreationTimestamp="2025-11-24 14:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:02:53.490183482 +0000 UTC m=+1.679806674" watchObservedRunningTime="2025-11-24 14:02:53.519183143 +0000 UTC m=+1.708806335"
	Nov 24 14:02:53 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:53.552307    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-609438" podStartSLOduration=0.552288297 podStartE2EDuration="552.288297ms" podCreationTimestamp="2025-11-24 14:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:02:53.519051531 +0000 UTC m=+1.708674723" watchObservedRunningTime="2025-11-24 14:02:53.552288297 +0000 UTC m=+1.741911464"
	Nov 24 14:02:56 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:56.554079    1473 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 14:02:56 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:56.555454    1473 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.388131    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92836c58-7b28-4b1b-838d-9491cd23823b-lib-modules\") pod \"kindnet-jcqb9\" (UID: \"92836c58-7b28-4b1b-838d-9491cd23823b\") " pod="kube-system/kindnet-jcqb9"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.388181    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8lr4\" (UniqueName: \"kubernetes.io/projected/92836c58-7b28-4b1b-838d-9491cd23823b-kube-api-access-t8lr4\") pod \"kindnet-jcqb9\" (UID: \"92836c58-7b28-4b1b-838d-9491cd23823b\") " pod="kube-system/kindnet-jcqb9"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.388207    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/814cc9f1-7449-431c-a35d-3ac3b4d05db9-kube-proxy\") pod \"kube-proxy-frlpg\" (UID: \"814cc9f1-7449-431c-a35d-3ac3b4d05db9\") " pod="kube-system/kube-proxy-frlpg"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.388225    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/814cc9f1-7449-431c-a35d-3ac3b4d05db9-xtables-lock\") pod \"kube-proxy-frlpg\" (UID: \"814cc9f1-7449-431c-a35d-3ac3b4d05db9\") " pod="kube-system/kube-proxy-frlpg"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.388242    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/814cc9f1-7449-431c-a35d-3ac3b4d05db9-lib-modules\") pod \"kube-proxy-frlpg\" (UID: \"814cc9f1-7449-431c-a35d-3ac3b4d05db9\") " pod="kube-system/kube-proxy-frlpg"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.388261    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/92836c58-7b28-4b1b-838d-9491cd23823b-cni-cfg\") pod \"kindnet-jcqb9\" (UID: \"92836c58-7b28-4b1b-838d-9491cd23823b\") " pod="kube-system/kindnet-jcqb9"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.388277    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lllv\" (UniqueName: \"kubernetes.io/projected/814cc9f1-7449-431c-a35d-3ac3b4d05db9-kube-api-access-5lllv\") pod \"kube-proxy-frlpg\" (UID: \"814cc9f1-7449-431c-a35d-3ac3b4d05db9\") " pod="kube-system/kube-proxy-frlpg"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.388298    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92836c58-7b28-4b1b-838d-9491cd23823b-xtables-lock\") pod \"kindnet-jcqb9\" (UID: \"92836c58-7b28-4b1b-838d-9491cd23823b\") " pod="kube-system/kindnet-jcqb9"
	Nov 24 14:02:57 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:57.567562    1473 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 14:02:59 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:02:59.480198    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-jcqb9" podStartSLOduration=2.480176019 podStartE2EDuration="2.480176019s" podCreationTimestamp="2025-11-24 14:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:02:58.497591111 +0000 UTC m=+6.687214303" watchObservedRunningTime="2025-11-24 14:02:59.480176019 +0000 UTC m=+7.669799186"
	Nov 24 14:03:02 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:02.316210    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-frlpg" podStartSLOduration=5.316187635 podStartE2EDuration="5.316187635s" podCreationTimestamp="2025-11-24 14:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:02:59.480370113 +0000 UTC m=+7.669993281" watchObservedRunningTime="2025-11-24 14:03:02.316187635 +0000 UTC m=+10.505810803"
	Nov 24 14:03:38 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:38.737042    1473 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 14:03:38 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:38.944808    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/98d7eb97-3a94-4904-9af3-f063689cec40-tmp\") pod \"storage-provisioner\" (UID: \"98d7eb97-3a94-4904-9af3-f063689cec40\") " pod="kube-system/storage-provisioner"
	Nov 24 14:03:38 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:38.944876    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpl6d\" (UniqueName: \"kubernetes.io/projected/98d7eb97-3a94-4904-9af3-f063689cec40-kube-api-access-hpl6d\") pod \"storage-provisioner\" (UID: \"98d7eb97-3a94-4904-9af3-f063689cec40\") " pod="kube-system/storage-provisioner"
	Nov 24 14:03:38 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:38.944900    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de-config-volume\") pod \"coredns-66bc5c9577-qctbs\" (UID: \"cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de\") " pod="kube-system/coredns-66bc5c9577-qctbs"
	Nov 24 14:03:38 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:38.944920    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm8bp\" (UniqueName: \"kubernetes.io/projected/cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de-kube-api-access-rm8bp\") pod \"coredns-66bc5c9577-qctbs\" (UID: \"cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de\") " pod="kube-system/coredns-66bc5c9577-qctbs"
	Nov 24 14:03:39 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:39.600204    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.600171711 podStartE2EDuration="40.600171711s" podCreationTimestamp="2025-11-24 14:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:03:39.599491432 +0000 UTC m=+47.789114617" watchObservedRunningTime="2025-11-24 14:03:39.600171711 +0000 UTC m=+47.789794879"
	Nov 24 14:03:39 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:39.600442    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qctbs" podStartSLOduration=42.600434459 podStartE2EDuration="42.600434459s" podCreationTimestamp="2025-11-24 14:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:03:39.57748311 +0000 UTC m=+47.767106351" watchObservedRunningTime="2025-11-24 14:03:39.600434459 +0000 UTC m=+47.790057635"
	Nov 24 14:03:41 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:41.768952    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4fpk\" (UniqueName: \"kubernetes.io/projected/ad098064-4a27-4674-9c05-03b1e253a816-kube-api-access-d4fpk\") pod \"busybox\" (UID: \"ad098064-4a27-4674-9c05-03b1e253a816\") " pod="default/busybox"
	Nov 24 14:03:44 default-k8s-diff-port-609438 kubelet[1473]: I1124 14:03:44.596618    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.37725709 podStartE2EDuration="3.596505334s" podCreationTimestamp="2025-11-24 14:03:41 +0000 UTC" firstStartedPulling="2025-11-24 14:03:42.030260153 +0000 UTC m=+50.219883329" lastFinishedPulling="2025-11-24 14:03:44.249508405 +0000 UTC m=+52.439131573" observedRunningTime="2025-11-24 14:03:44.596379466 +0000 UTC m=+52.786002642" watchObservedRunningTime="2025-11-24 14:03:44.596505334 +0000 UTC m=+52.786128518"
	
	
	==> storage-provisioner [00dfaea3cc3d9d681a16a40db39e8b36acc58147c4a4bcba29b9f0947732bc54] <==
	I1124 14:03:39.419369       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:03:39.434895       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:03:39.435178       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:03:39.437646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:39.444147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:03:39.444467       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:03:39.444850       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-609438_af2eb735-6513-4ee2-94f5-9fedff14594f!
	I1124 14:03:39.445626       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b412a730-d60a-41a8-bbbf-d1e5b5b11fb8", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-609438_af2eb735-6513-4ee2-94f5-9fedff14594f became leader
	W1124 14:03:39.451258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:39.457963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:03:39.545832       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-609438_af2eb735-6513-4ee2-94f5-9fedff14594f!
	W1124 14:03:41.461939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:41.467569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:43.470367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:43.477631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:45.482107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:45.488970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:47.492740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:47.499012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:49.502743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:49.509999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:51.524356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:51.534506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:53.539530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:53.548647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-609438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.22s)
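The repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner log above appear to come from its leader-election loop renewing the kube-system/k8s.io-minikube-hostpath Endpoints object (the same object named in the LeaderElection event at 14:03:39). A minimal way to inspect that lock directly, assuming the default-k8s-diff-port-609438 context is still reachable, is:

	# the leader record is normally kept in the control-plane.alpha.kubernetes.io/leader annotation
	kubectl --context default-k8s-diff-port-609438 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml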

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (14.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-593634 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f8c75830-451a-4be9-beb5-1131f44fca93] Pending
helpers_test.go:352: "busybox" [f8c75830-451a-4be9-beb5-1131f44fca93] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f8c75830-451a-4be9-beb5-1131f44fca93] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004444481s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-593634 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
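As in the other DeployApp failures in this run, the assertion that fails here is the file-descriptor limit check: the busybox pod reports a soft "ulimit -n" of 1024 where the test expects 1048576. A rough manual reproduction, assuming the embed-certs-593634 profile is still running (the process name and paths probed inside the node container are assumptions, not taken from this report):

	# re-run the in-pod check that the test performs
	kubectl --context embed-certs-593634 exec busybox -- /bin/sh -c "ulimit -n"

	# compare with the open-files limit containerd itself runs with inside the minikube node container
	docker exec embed-certs-593634 sh -c 'grep "open files" /proc/$(pgrep -xo containerd)/limits'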
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-593634
helpers_test.go:243: (dbg) docker inspect embed-certs-593634:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6e5dee24e5b06ef4edf06922c23343665480a4e085114101cb004988b20b9fef",
	        "Created": "2025-11-24T14:02:31.673833431Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 214780,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:02:31.753778558Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/6e5dee24e5b06ef4edf06922c23343665480a4e085114101cb004988b20b9fef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6e5dee24e5b06ef4edf06922c23343665480a4e085114101cb004988b20b9fef/hostname",
	        "HostsPath": "/var/lib/docker/containers/6e5dee24e5b06ef4edf06922c23343665480a4e085114101cb004988b20b9fef/hosts",
	        "LogPath": "/var/lib/docker/containers/6e5dee24e5b06ef4edf06922c23343665480a4e085114101cb004988b20b9fef/6e5dee24e5b06ef4edf06922c23343665480a4e085114101cb004988b20b9fef-json.log",
	        "Name": "/embed-certs-593634",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-593634:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-593634",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6e5dee24e5b06ef4edf06922c23343665480a4e085114101cb004988b20b9fef",
	                "LowerDir": "/var/lib/docker/overlay2/bfe836a413401a99276c285b2ed8bd202617ff61d94db99f7c8efa134ddc9592-init/diff:/var/lib/docker/overlay2/f206897dad0d7c6b66379aa7c75402ab98ba158a4fc5aedf84eda3d57da10430/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bfe836a413401a99276c285b2ed8bd202617ff61d94db99f7c8efa134ddc9592/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bfe836a413401a99276c285b2ed8bd202617ff61d94db99f7c8efa134ddc9592/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bfe836a413401a99276c285b2ed8bd202617ff61d94db99f7c8efa134ddc9592/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-593634",
	                "Source": "/var/lib/docker/volumes/embed-certs-593634/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-593634",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-593634",
	                "name.minikube.sigs.k8s.io": "embed-certs-593634",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6bd72f29cb118f69e9142e2d1382fba48b6f55fa7d86bdbdd835204321e3acca",
	            "SandboxKey": "/var/run/docker/netns/6bd72f29cb11",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-593634": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:4e:81:77:c4:1f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "26b8c3d8c63cdc00f4fee3f97bf6b2a945c3da49721adc903f246a874d6a2dc0",
	                    "EndpointID": "1cc13b824ec609250d80920f5396576e884feee9a58c7bf52b4aaae6c9212945",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-593634",
	                        "6e5dee24e5b0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
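In the inspect output above, PortBindings requests ephemeral host ports (empty "HostPort") bound to 127.0.0.1, while NetworkSettings.Ports records what the daemon actually allocated (33068-33072). A single mapping can be recovered without parsing the JSON, for example for this profile's apiserver port 8443/tcp:

	docker port embed-certs-593634 8443/tcp    # prints 127.0.0.1:33071 for the state captured above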
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-593634 -n embed-certs-593634
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-593634 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-593634 logs -n 25: (1.178416911s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ start   │ -p force-systemd-env-134839 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-134839     │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p kubernetes-upgrade-758885                                                                                                                                                                                                                        │ kubernetes-upgrade-758885    │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p cert-expiration-865605 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-865605       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ force-systemd-env-134839 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-134839     │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p force-systemd-env-134839                                                                                                                                                                                                                         │ force-systemd-env-134839     │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p cert-options-440754 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-440754          │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ cert-options-440754 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-440754          │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ -p cert-options-440754 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-440754          │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p cert-options-440754                                                                                                                                                                                                                              │ cert-options-440754          │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p old-k8s-version-318786 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 14:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-318786 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ stop    │ -p old-k8s-version-318786 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-318786 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ start   │ -p old-k8s-version-318786 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ image   │ old-k8s-version-318786 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ pause   │ -p old-k8s-version-318786 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ start   │ -p cert-expiration-865605 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-865605       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ unpause │ -p old-k8s-version-318786 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ delete  │ -p old-k8s-version-318786                                                                                                                                                                                                                           │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ delete  │ -p old-k8s-version-318786                                                                                                                                                                                                                           │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ start   │ -p default-k8s-diff-port-609438 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:03 UTC │
	│ delete  │ -p cert-expiration-865605                                                                                                                                                                                                                           │ cert-expiration-865605       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ start   │ -p embed-certs-593634 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:03 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-609438 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:03 UTC │ 24 Nov 25 14:03 UTC │
	│ stop    │ -p default-k8s-diff-port-609438 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:03 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:02:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:02:25.355768  213570 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:02:25.355897  213570 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:02:25.355929  213570 out.go:374] Setting ErrFile to fd 2...
	I1124 14:02:25.355935  213570 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:02:25.356214  213570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 14:02:25.356610  213570 out.go:368] Setting JSON to false
	I1124 14:02:25.357458  213570 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6294,"bootTime":1763986651,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 14:02:25.357531  213570 start.go:143] virtualization:  
	I1124 14:02:25.363130  213570 out.go:179] * [embed-certs-593634] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:02:25.366080  213570 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:02:25.366317  213570 notify.go:221] Checking for updates...
	I1124 14:02:25.371678  213570 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:02:25.374517  213570 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:02:25.377392  213570 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	I1124 14:02:25.380291  213570 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:02:25.383233  213570 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:02:25.386803  213570 config.go:182] Loaded profile config "default-k8s-diff-port-609438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:02:25.386988  213570 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:02:25.428466  213570 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:02:25.428628  213570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:02:25.551573  213570 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:55 SystemTime:2025-11-24 14:02:25.537516273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:02:25.551683  213570 docker.go:319] overlay module found
	I1124 14:02:25.556682  213570 out.go:179] * Using the docker driver based on user configuration
	I1124 14:02:25.559709  213570 start.go:309] selected driver: docker
	I1124 14:02:25.559726  213570 start.go:927] validating driver "docker" against <nil>
	I1124 14:02:25.559738  213570 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:02:25.560805  213570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:02:25.668193  213570 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-24 14:02:25.655788801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:02:25.668344  213570 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:02:25.668552  213570 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:02:25.671717  213570 out.go:179] * Using Docker driver with root privileges
	I1124 14:02:25.674536  213570 cni.go:84] Creating CNI manager for ""
	I1124 14:02:25.674610  213570 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:02:25.674621  213570 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:02:25.674693  213570 start.go:353] cluster config:
	{Name:embed-certs-593634 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:02:25.677759  213570 out.go:179] * Starting "embed-certs-593634" primary control-plane node in "embed-certs-593634" cluster
	I1124 14:02:25.680596  213570 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 14:02:25.683549  213570 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:02:25.686518  213570 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:02:25.686579  213570 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1124 14:02:25.686594  213570 cache.go:65] Caching tarball of preloaded images
	I1124 14:02:25.686679  213570 preload.go:238] Found /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1124 14:02:25.686689  213570 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 14:02:25.686792  213570 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/config.json ...
	I1124 14:02:25.686808  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/config.json: {Name:mkcf0b417a9473ceb4b66956bfa520a43f4ebbeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:25.686945  213570 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:02:25.710900  213570 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:02:25.710919  213570 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:02:25.710933  213570 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:02:25.710962  213570 start.go:360] acquireMachinesLock for embed-certs-593634: {Name:mk435fa1f228450b1765e3435053e751c40a1834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:02:25.711053  213570 start.go:364] duration metric: took 77.449µs to acquireMachinesLock for "embed-certs-593634"
	I1124 14:02:25.711077  213570 start.go:93] Provisioning new machine with config: &{Name:embed-certs-593634 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 14:02:25.711153  213570 start.go:125] createHost starting for "" (driver="docker")
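The pull is skipped a few lines above because the pinned kicbase digest is already present in the local Docker daemon. A rough manual equivalent of that check (minikube itself does this through the Docker API, not the CLI; digest copied from the log, tag dropped for the by-digest lookup):

    # is the pinned kicbase image already in the local daemon?
    IMG='gcr.io/k8s-minikube/kicbase-builds@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f'
    if docker image inspect "$IMG" >/dev/null 2>&1; then
        echo "found in local docker daemon, skipping pull"
    else
        docker pull "$IMG"
    fi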
	I1124 14:02:23.909747  212383 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-609438 --name default-k8s-diff-port-609438 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-609438 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-609438 --network default-k8s-diff-port-609438 --ip 192.168.85.2 --volume default-k8s-diff-port-609438:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:02:24.307279  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Running}}
	I1124 14:02:24.327311  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:24.369313  212383 cli_runner.go:164] Run: docker exec default-k8s-diff-port-609438 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:02:24.459655  212383 oci.go:144] the created container "default-k8s-diff-port-609438" has a running status.
	I1124 14:02:24.459682  212383 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa...
	I1124 14:02:24.627125  212383 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:02:24.888609  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:24.933748  212383 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:02:24.933772  212383 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-609438 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 14:02:25.043026  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:25.089321  212383 machine.go:94] provisionDockerMachine start ...
	I1124 14:02:25.089431  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:25.153799  212383 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:25.154239  212383 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 14:02:25.154258  212383 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:02:25.461029  212383 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-609438
	
	I1124 14:02:25.461072  212383 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-609438"
	I1124 14:02:25.461152  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:25.543103  212383 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:25.543625  212383 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 14:02:25.543643  212383 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-609438 && echo "default-k8s-diff-port-609438" | sudo tee /etc/hostname
	I1124 14:02:25.773225  212383 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-609438
	
	I1124 14:02:25.773297  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:25.800013  212383 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:25.801080  212383 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 14:02:25.801108  212383 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-609438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-609438/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-609438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:02:26.006217  212383 main.go:143] libmachine: SSH cmd err, output: <nil>: 
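Every provisioning command in this block runs over SSH to the container's published 22/tcp port (33063 here), using the key generated a few lines earlier. A manual session can be opened the same way, roughly:

    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-609438)
    ssh -o StrictHostKeyChecking=no -p "$PORT" \
        -i /home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa \
        docker@127.0.0.1 hostname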
	I1124 14:02:26.006244  212383 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2368/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2368/.minikube}
	I1124 14:02:26.006263  212383 ubuntu.go:190] setting up certificates
	I1124 14:02:26.006272  212383 provision.go:84] configureAuth start
	I1124 14:02:26.006350  212383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-609438
	I1124 14:02:26.026909  212383 provision.go:143] copyHostCerts
	I1124 14:02:26.026970  212383 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem, removing ...
	I1124 14:02:26.026980  212383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem
	I1124 14:02:26.027046  212383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem (1082 bytes)
	I1124 14:02:26.027134  212383 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem, removing ...
	I1124 14:02:26.027140  212383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem
	I1124 14:02:26.027166  212383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem (1123 bytes)
	I1124 14:02:26.027243  212383 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem, removing ...
	I1124 14:02:26.027248  212383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem
	I1124 14:02:26.027271  212383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem (1679 bytes)
	I1124 14:02:26.027316  212383 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-609438 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-609438 localhost minikube]
	I1124 14:02:26.479334  212383 provision.go:177] copyRemoteCerts
	I1124 14:02:26.479453  212383 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:02:26.479529  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:26.509970  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:26.633721  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:02:26.665930  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 14:02:26.697677  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 14:02:26.732905  212383 provision.go:87] duration metric: took 726.609261ms to configureAuth
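configureAuth signs a per-machine server certificate against the shared minikube CA with the SANs listed above, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. minikube does this in Go; an approximate openssl equivalent using the same SANs would look like:

    openssl req -new -newkey rsa:2048 -nodes \
        -subj "/O=jenkins.default-k8s-diff-port-609438" \
        -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:default-k8s-diff-port-609438,DNS:localhost,DNS:minikube') \
        -out server.pem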
	I1124 14:02:26.732938  212383 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:02:26.733137  212383 config.go:182] Loaded profile config "default-k8s-diff-port-609438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:02:26.733153  212383 machine.go:97] duration metric: took 1.643811371s to provisionDockerMachine
	I1124 14:02:26.733161  212383 client.go:176] duration metric: took 7.487822203s to LocalClient.Create
	I1124 14:02:26.733175  212383 start.go:167] duration metric: took 7.487885367s to libmachine.API.Create "default-k8s-diff-port-609438"
	I1124 14:02:26.733189  212383 start.go:293] postStartSetup for "default-k8s-diff-port-609438" (driver="docker")
	I1124 14:02:26.733198  212383 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:02:26.733271  212383 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:02:26.733323  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:26.763570  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:26.897119  212383 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:02:26.901182  212383 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:02:26.901211  212383 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:02:26.901223  212383 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/addons for local assets ...
	I1124 14:02:26.901281  212383 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/files for local assets ...
	I1124 14:02:26.901360  212383 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem -> 41782.pem in /etc/ssl/certs
	I1124 14:02:26.901463  212383 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:02:26.909763  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:02:26.930128  212383 start.go:296] duration metric: took 196.924439ms for postStartSetup
	I1124 14:02:26.930508  212383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-609438
	I1124 14:02:26.950744  212383 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/config.json ...
	I1124 14:02:26.951035  212383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:02:26.951091  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:26.973535  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:27.077778  212383 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:02:27.083066  212383 start.go:128] duration metric: took 7.841363739s to createHost
	I1124 14:02:27.083089  212383 start.go:83] releasing machines lock for "default-k8s-diff-port-609438", held for 7.84148292s
	I1124 14:02:27.083163  212383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-609438
	I1124 14:02:27.105539  212383 ssh_runner.go:195] Run: cat /version.json
	I1124 14:02:27.105585  212383 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:02:27.105661  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:27.105589  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:27.149461  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:27.157732  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:27.367320  212383 ssh_runner.go:195] Run: systemctl --version
	I1124 14:02:27.374447  212383 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:02:27.380473  212383 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:02:27.380647  212383 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:02:27.413935  212383 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 14:02:27.414007  212383 start.go:496] detecting cgroup driver to use...
	I1124 14:02:27.414056  212383 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:02:27.414133  212383 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 14:02:27.430159  212383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 14:02:27.444285  212383 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:02:27.444392  212383 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:02:27.461944  212383 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:02:27.481645  212383 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:02:27.639351  212383 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:02:27.799286  212383 docker.go:234] disabling docker service ...
	I1124 14:02:27.799350  212383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:02:27.831375  212383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:02:27.845484  212383 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:02:27.983498  212383 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:02:28.133537  212383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
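The base image ships several runtimes (crio, cri-docker and docker are all stopped above), so these steps mask everything except containerd. Whether that took effect can be checked on the node with something like:

    sudo systemctl is-enabled docker.service cri-docker.service   # both should report "masked"
    sudo systemctl is-active docker.service                       # should report "inactive"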
	I1124 14:02:28.150716  212383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:02:28.166057  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 14:02:28.175128  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 14:02:28.184145  212383 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 14:02:28.184265  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 14:02:28.192987  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:02:28.202626  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 14:02:28.211553  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:02:28.220020  212383 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:02:28.228018  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 14:02:28.236891  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 14:02:28.245507  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 14:02:28.254226  212383 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:02:28.262068  212383 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:02:28.269803  212383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:28.442896  212383 ssh_runner.go:195] Run: sudo systemctl restart containerd
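The sed edits above pin the pause image, keep the cgroupfs driver (SystemdCgroup = false) and re-enable unprivileged ports before containerd is restarted. The merged result can be sanity-checked afterwards, for example:

    grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml
    containerd config dump | grep -E 'SystemdCgroup|sandbox_image'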
	I1124 14:02:28.596361  212383 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 14:02:28.596444  212383 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 14:02:28.602936  212383 start.go:564] Will wait 60s for crictl version
	I1124 14:02:28.603014  212383 ssh_runner.go:195] Run: which crictl
	I1124 14:02:28.607012  212383 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:02:28.645174  212383 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 14:02:28.645247  212383 ssh_runner.go:195] Run: containerd --version
	I1124 14:02:28.669934  212383 ssh_runner.go:195] Run: containerd --version
	I1124 14:02:28.700929  212383 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 14:02:28.704729  212383 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-609438 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:02:28.734893  212383 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 14:02:28.738862  212383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:02:28.749508  212383 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-609438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-609438 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:02:28.749613  212383 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:02:28.749681  212383 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:02:28.782633  212383 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:02:28.782654  212383 containerd.go:534] Images already preloaded, skipping extraction
	I1124 14:02:28.782711  212383 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:02:28.839126  212383 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:02:28.839147  212383 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:02:28.839155  212383 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1124 14:02:28.839244  212383 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-609438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-609438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:02:28.839314  212383 ssh_runner.go:195] Run: sudo crictl info
	I1124 14:02:28.874904  212383 cni.go:84] Creating CNI manager for ""
	I1124 14:02:28.874924  212383 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:02:28.874940  212383 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:02:28.874963  212383 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-609438 NodeName:default-k8s-diff-port-609438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/c
erts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:02:28.875085  212383 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-609438"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:02:28.875154  212383 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:02:28.884597  212383 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:02:28.884669  212383 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
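The multi-document config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what later lands under /var/tmp/minikube as kubeadm.yaml (see the scp and cp steps further down). If it needs to be inspected or re-checked on the node, something along these lines works, assuming the binaries path used by this test and a kubeadm release that has "config validate":

    sudo cat /var/tmp/minikube/kubeadm.yaml
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml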
	I1124 14:02:25.714459  213570 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:02:25.714725  213570 start.go:159] libmachine.API.Create for "embed-certs-593634" (driver="docker")
	I1124 14:02:25.714819  213570 client.go:173] LocalClient.Create starting
	I1124 14:02:25.714954  213570 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem
	I1124 14:02:25.715008  213570 main.go:143] libmachine: Decoding PEM data...
	I1124 14:02:25.715051  213570 main.go:143] libmachine: Parsing certificate...
	I1124 14:02:25.715148  213570 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem
	I1124 14:02:25.715206  213570 main.go:143] libmachine: Decoding PEM data...
	I1124 14:02:25.715261  213570 main.go:143] libmachine: Parsing certificate...
	I1124 14:02:25.715745  213570 cli_runner.go:164] Run: docker network inspect embed-certs-593634 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:02:25.736780  213570 cli_runner.go:211] docker network inspect embed-certs-593634 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:02:25.736871  213570 network_create.go:284] running [docker network inspect embed-certs-593634] to gather additional debugging logs...
	I1124 14:02:25.736888  213570 cli_runner.go:164] Run: docker network inspect embed-certs-593634
	W1124 14:02:25.769114  213570 cli_runner.go:211] docker network inspect embed-certs-593634 returned with exit code 1
	I1124 14:02:25.769141  213570 network_create.go:287] error running [docker network inspect embed-certs-593634]: docker network inspect embed-certs-593634: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-593634 not found
	I1124 14:02:25.769154  213570 network_create.go:289] output of [docker network inspect embed-certs-593634]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-593634 not found
	
	** /stderr **
	I1124 14:02:25.769257  213570 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:02:25.800766  213570 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e5e15b13860d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:3d:37:c4:cc:77} reservation:<nil>}
	I1124 14:02:25.801103  213570 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-66593a990bce IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:c0:9b:bc:41:ca} reservation:<nil>}
	I1124 14:02:25.801995  213570 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-37e9fb0954cf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:0b:6f:6e:b2:8c} reservation:<nil>}
	I1124 14:02:25.802424  213570 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e9170}
	I1124 14:02:25.802442  213570 network_create.go:124] attempt to create docker network embed-certs-593634 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 14:02:25.802493  213570 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-593634 embed-certs-593634
	I1124 14:02:25.881093  213570 network_create.go:108] docker network embed-certs-593634 192.168.76.0/24 created
	I1124 14:02:25.881122  213570 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-593634" container
	I1124 14:02:25.881203  213570 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:02:25.903081  213570 cli_runner.go:164] Run: docker volume create embed-certs-593634 --label name.minikube.sigs.k8s.io=embed-certs-593634 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:02:25.931462  213570 oci.go:103] Successfully created a docker volume embed-certs-593634
	I1124 14:02:25.931542  213570 cli_runner.go:164] Run: docker run --rm --name embed-certs-593634-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-593634 --entrypoint /usr/bin/test -v embed-certs-593634:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:02:26.581166  213570 oci.go:107] Successfully prepared a docker volume embed-certs-593634
	I1124 14:02:26.581232  213570 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:02:26.581244  213570 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:02:26.581311  213570 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-593634:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
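The free-subnet search above skips every /24 already held by another profile's bridge network (192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24) before settling on 192.168.76.0/24. The occupied subnets can be listed directly from the minikube-labelled networks:

    docker network ls -q --filter label=created_by.minikube.sigs.k8s.io=true \
        | xargs -r docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'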
	I1124 14:02:28.894421  212383 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1124 14:02:28.909480  212383 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:02:28.924519  212383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
	I1124 14:02:28.939585  212383 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:02:28.943813  212383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:02:28.954534  212383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:29.104027  212383 ssh_runner.go:195] Run: sudo systemctl start kubelet
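The two small files copied above (kubelet.service, 352 bytes, and the 10-kubeadm.conf drop-in, 332 bytes) are what systemd merges into the kubelet unit started here; on the node the effective unit can be viewed with:

    sudo systemctl cat kubelet            # unit file plus the 10-kubeadm.conf drop-in
    sudo systemctl status kubelet --no-pager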
	I1124 14:02:29.125453  212383 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438 for IP: 192.168.85.2
	I1124 14:02:29.125476  212383 certs.go:195] generating shared ca certs ...
	I1124 14:02:29.125503  212383 certs.go:227] acquiring lock for ca certs: {Name:mkcd8707c782acde0e57168c044a3df942dc4ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.125641  212383 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key
	I1124 14:02:29.125695  212383 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key
	I1124 14:02:29.125707  212383 certs.go:257] generating profile certs ...
	I1124 14:02:29.125768  212383 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.key
	I1124 14:02:29.125789  212383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.crt with IP's: []
	I1124 14:02:29.324459  212383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.crt ...
	I1124 14:02:29.324491  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.crt: {Name:mk8aada29dd487d5091685276369440b7d624321 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.324640  212383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.key ...
	I1124 14:02:29.324656  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.key: {Name:mka039edce6f440d55864b8259b2b6e6a4166f35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.324742  212383 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key.0b070d75
	I1124 14:02:29.324762  212383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt.0b070d75 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 14:02:29.388053  212383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt.0b070d75 ...
	I1124 14:02:29.388089  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt.0b070d75: {Name:mk8c33f3dd28832381eccdbc39352bbcf3fad513 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.388234  212383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key.0b070d75 ...
	I1124 14:02:29.388250  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key.0b070d75: {Name:mk1a2d7229ced6b28d71658195699ecc4e6d6cbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.388323  212383 certs.go:382] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt.0b070d75 -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt
	I1124 14:02:29.388407  212383 certs.go:386] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key.0b070d75 -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key
	I1124 14:02:29.388467  212383 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.key
	I1124 14:02:29.388494  212383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.crt with IP's: []
	I1124 14:02:29.607942  212383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.crt ...
	I1124 14:02:29.607978  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.crt: {Name:mkf0227a8560a7238360c53d12e60293f9779f1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.608133  212383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.key ...
	I1124 14:02:29.608148  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.key: {Name:mkdb69944b7ff660a91a53e6ae6208e817233479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.608326  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem (1338 bytes)
	W1124 14:02:29.608368  212383 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178_empty.pem, impossibly tiny 0 bytes
	I1124 14:02:29.608383  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:02:29.608412  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem (1082 bytes)
	I1124 14:02:29.608442  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:02:29.608468  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem (1679 bytes)
	I1124 14:02:29.608515  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:02:29.609076  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:02:29.626013  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 14:02:29.643798  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:02:29.661375  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:02:29.679743  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 14:02:29.696528  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:02:29.728013  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:02:29.773516  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:02:29.805187  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem --> /usr/share/ca-certificates/4178.pem (1338 bytes)
	I1124 14:02:29.826865  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /usr/share/ca-certificates/41782.pem (1708 bytes)
	I1124 14:02:29.847529  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:02:29.867886  212383 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:02:29.882919  212383 ssh_runner.go:195] Run: openssl version
	I1124 14:02:29.889477  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41782.pem && ln -fs /usr/share/ca-certificates/41782.pem /etc/ssl/certs/41782.pem"
	I1124 14:02:29.898302  212383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41782.pem
	I1124 14:02:29.904667  212383 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/41782.pem
	I1124 14:02:29.904736  212383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41782.pem
	I1124 14:02:29.948420  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:02:29.957558  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:02:29.966733  212383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:29.970899  212383 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:29.970989  212383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:30.019996  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:02:30.030890  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4178.pem && ln -fs /usr/share/ca-certificates/4178.pem /etc/ssl/certs/4178.pem"
	I1124 14:02:30.057890  212383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4178.pem
	I1124 14:02:30.080661  212383 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4178.pem
	I1124 14:02:30.080813  212383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4178.pem
	I1124 14:02:30.155115  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4178.pem /etc/ssl/certs/51391683.0"
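The hex names used in this block (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash links: the value printed by 'openssl x509 -hash' becomes the symlink name that TLS clients use to look the CA up in /etc/ssl/certs. Reproduced by hand for the minikube CA:

    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"   # H is b5213941 here
    ls -l "/etc/ssl/certs/${H}.0"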
	I1124 14:02:30.165475  212383 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:02:30.170978  212383 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:02:30.171035  212383 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-609438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-609438 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:02:30.171124  212383 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 14:02:30.171192  212383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:02:30.211462  212383 cri.go:89] found id: ""
	I1124 14:02:30.211552  212383 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:02:30.226907  212383 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:02:30.236649  212383 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:02:30.236720  212383 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:02:30.248370  212383 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:02:30.248462  212383 kubeadm.go:158] found existing configuration files:
	
	I1124 14:02:30.248548  212383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1124 14:02:30.262084  212383 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:02:30.262152  212383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:02:30.270330  212383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1124 14:02:30.279476  212383 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:02:30.279543  212383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:02:30.288703  212383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1124 14:02:30.297950  212383 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:02:30.298023  212383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:02:30.310718  212383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1124 14:02:30.320531  212383 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:02:30.320603  212383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
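(The four grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed before kubeadm init runs. Roughly equivalent shell, with the endpoint taken from this profile's API server port (8444 here, 8443 for the other profiles):

    ENDPOINT="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
)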
	I1124 14:02:30.329639  212383 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:02:30.406424  212383 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 14:02:30.406661  212383 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:02:30.479025  212383 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 14:02:31.562417  213570 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-593634:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.981062358s)
	I1124 14:02:31.562447  213570 kic.go:203] duration metric: took 4.981201018s to extract preloaded images to volume ...
	W1124 14:02:31.562585  213570 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 14:02:31.562696  213570 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 14:02:31.653956  213570 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-593634 --name embed-certs-593634 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-593634 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-593634 --network embed-certs-593634 --ip 192.168.76.2 --volume embed-certs-593634:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:02:32.104099  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Running}}
	I1124 14:02:32.133617  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:02:32.170125  213570 cli_runner.go:164] Run: docker exec embed-certs-593634 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:02:32.243591  213570 oci.go:144] the created container "embed-certs-593634" has a running status.
	I1124 14:02:32.243619  213570 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa...
	I1124 14:02:33.008353  213570 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:02:33.030437  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:02:33.051118  213570 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:02:33.051142  213570 kic_runner.go:114] Args: [docker exec --privileged embed-certs-593634 chown docker:docker /home/docker/.ssh/authorized_keys]
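(The kic_runner steps above amount to generating a key pair on the host and installing the public half as the docker user's authorized_keys inside the container. A rough host-side illustration, not minikube's actual code path, using the container name from this run:

    ssh-keygen -t rsa -N "" -f ./id_rsa
    docker exec embed-certs-593634 mkdir -p /home/docker/.ssh
    docker cp ./id_rsa.pub embed-certs-593634:/home/docker/.ssh/authorized_keys
    docker exec --privileged embed-certs-593634 chown docker:docker /home/docker/.ssh/authorized_keys
)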
	I1124 14:02:33.146272  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:02:33.172981  213570 machine.go:94] provisionDockerMachine start ...
	I1124 14:02:33.173175  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:33.203273  213570 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:33.203611  213570 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 14:02:33.203620  213570 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:02:33.204370  213570 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 14:02:36.376430  213570 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-593634
	
	I1124 14:02:36.376458  213570 ubuntu.go:182] provisioning hostname "embed-certs-593634"
	I1124 14:02:36.376538  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:36.401139  213570 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:36.401453  213570 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 14:02:36.401469  213570 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-593634 && echo "embed-certs-593634" | sudo tee /etc/hostname
	I1124 14:02:36.589650  213570 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-593634
	
	I1124 14:02:36.589799  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:36.618006  213570 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:36.618310  213570 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 14:02:36.618326  213570 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-593634' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-593634/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-593634' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:02:36.779947  213570 main.go:143] libmachine: SSH cmd err, output: <nil>: 
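(The script above only touches /etc/hosts when no entry for the new hostname exists yet; afterwards the file should contain a line along the lines of:

    127.0.1.1 embed-certs-593634
)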
	I1124 14:02:36.780024  213570 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2368/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2368/.minikube}
	I1124 14:02:36.780065  213570 ubuntu.go:190] setting up certificates
	I1124 14:02:36.780107  213570 provision.go:84] configureAuth start
	I1124 14:02:36.780202  213570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-593634
	I1124 14:02:36.805555  213570 provision.go:143] copyHostCerts
	I1124 14:02:36.805621  213570 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem, removing ...
	I1124 14:02:36.805629  213570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem
	I1124 14:02:36.805706  213570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem (1082 bytes)
	I1124 14:02:36.805804  213570 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem, removing ...
	I1124 14:02:36.805809  213570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem
	I1124 14:02:36.805834  213570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem (1123 bytes)
	I1124 14:02:36.805881  213570 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem, removing ...
	I1124 14:02:36.805885  213570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem
	I1124 14:02:36.805907  213570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem (1679 bytes)
	I1124 14:02:36.805955  213570 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem org=jenkins.embed-certs-593634 san=[127.0.0.1 192.168.76.2 embed-certs-593634 localhost minikube]
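(minikube generates this Docker server certificate with its own Go helpers; purely as an illustration, an openssl equivalent of a SAN-bearing server cert signed by the machine CA would look roughly like this, with hypothetical file names:

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr -subj "/O=jenkins.embed-certs-593634"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:embed-certs-593634,DNS:localhost,DNS:minikube")
)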
	I1124 14:02:37.074442  213570 provision.go:177] copyRemoteCerts
	I1124 14:02:37.074519  213570 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:02:37.074565  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.105113  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.228963  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 14:02:37.249359  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 14:02:37.269580  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 14:02:37.289369  213570 provision.go:87] duration metric: took 509.223197ms to configureAuth
	I1124 14:02:37.289401  213570 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:02:37.289587  213570 config.go:182] Loaded profile config "embed-certs-593634": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:02:37.289602  213570 machine.go:97] duration metric: took 4.11660352s to provisionDockerMachine
	I1124 14:02:37.289609  213570 client.go:176] duration metric: took 11.57476669s to LocalClient.Create
	I1124 14:02:37.289629  213570 start.go:167] duration metric: took 11.574903397s to libmachine.API.Create "embed-certs-593634"
	I1124 14:02:37.289636  213570 start.go:293] postStartSetup for "embed-certs-593634" (driver="docker")
	I1124 14:02:37.289644  213570 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:02:37.289700  213570 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:02:37.289746  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.313497  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.421261  213570 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:02:37.425376  213570 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:02:37.425402  213570 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:02:37.425413  213570 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/addons for local assets ...
	I1124 14:02:37.425467  213570 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/files for local assets ...
	I1124 14:02:37.425546  213570 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem -> 41782.pem in /etc/ssl/certs
	I1124 14:02:37.425648  213570 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:02:37.434170  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:02:37.454297  213570 start.go:296] duration metric: took 164.646825ms for postStartSetup
	I1124 14:02:37.454768  213570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-593634
	I1124 14:02:37.473090  213570 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/config.json ...
	I1124 14:02:37.473375  213570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:02:37.473419  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.492467  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.597996  213570 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:02:37.603374  213570 start.go:128] duration metric: took 11.892207017s to createHost
	I1124 14:02:37.603402  213570 start.go:83] releasing machines lock for "embed-certs-593634", held for 11.892340336s
	I1124 14:02:37.603491  213570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-593634
	I1124 14:02:37.622681  213570 ssh_runner.go:195] Run: cat /version.json
	I1124 14:02:37.622739  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.622988  213570 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:02:37.623049  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.653121  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.661266  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.867529  213570 ssh_runner.go:195] Run: systemctl --version
	I1124 14:02:37.880289  213570 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:02:37.885513  213570 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:02:37.885586  213570 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:02:37.919967  213570 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 14:02:37.920041  213570 start.go:496] detecting cgroup driver to use...
	I1124 14:02:37.920090  213570 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:02:37.920196  213570 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 14:02:37.939855  213570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 14:02:37.954765  213570 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:02:37.954832  213570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:02:37.973211  213570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:02:37.993531  213570 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:02:38.152217  213570 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:02:38.315244  213570 docker.go:234] disabling docker service ...
	I1124 14:02:38.315315  213570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:02:38.342606  213570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:02:38.357435  213570 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:02:38.501143  213570 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:02:38.653968  213570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:02:38.670062  213570 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:02:38.691612  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 14:02:38.701736  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 14:02:38.711955  213570 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 14:02:38.712108  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 14:02:38.722429  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:02:38.732416  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 14:02:38.742370  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:02:38.752386  213570 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:02:38.761548  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 14:02:38.771322  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 14:02:38.781079  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
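(The sed edits above rewrite a handful of keys in /etc/containerd/config.toml; a quick grep confirms they landed, with the expected values in the comment taken from the commands themselves:

    grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
    # expected after the edits: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.10.1",
    #                           enable_unprivileged_ports = true, conf_dir = "/etc/cni/net.d"
)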
	I1124 14:02:38.790804  213570 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:02:38.799605  213570 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:02:38.808384  213570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:38.957014  213570 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 14:02:39.134468  213570 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 14:02:39.134589  213570 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 14:02:39.138612  213570 start.go:564] Will wait 60s for crictl version
	I1124 14:02:39.138728  213570 ssh_runner.go:195] Run: which crictl
	I1124 14:02:39.142835  213570 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:02:39.183049  213570 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
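(The two 60 s waits above are simple polls: first for the containerd socket to appear after the restart, then for crictl to answer over it using the endpoint written to /etc/crictl.yaml earlier. By hand the socket check reduces to a loop like:

    for i in $(seq 1 60); do
      [ -S /run/containerd/containerd.sock ] && break
      sleep 1
    done
    sudo crictl version    # succeeds once containerd is serving the CRI socket
)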
	I1124 14:02:39.183127  213570 ssh_runner.go:195] Run: containerd --version
	I1124 14:02:39.209644  213570 ssh_runner.go:195] Run: containerd --version
	I1124 14:02:39.242563  213570 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 14:02:39.245632  213570 cli_runner.go:164] Run: docker network inspect embed-certs-593634 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:02:39.261116  213570 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 14:02:39.265349  213570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:02:39.275060  213570 kubeadm.go:884] updating cluster {Name:embed-certs-593634 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:02:39.275179  213570 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:02:39.275240  213570 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:02:39.309584  213570 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:02:39.309604  213570 containerd.go:534] Images already preloaded, skipping extraction
	I1124 14:02:39.309666  213570 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:02:39.338298  213570 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:02:39.338369  213570 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:02:39.338391  213570 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1124 14:02:39.338540  213570 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-593634 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
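(The empty ExecStart= line followed by the full command in the drop-in above is the standard systemd pattern for replacing, rather than appending to, the packaged unit's command. Inside the node the merged result can be inspected and reloaded with:

    systemctl cat kubelet      # shows kubelet.service plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload
)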
	I1124 14:02:39.338638  213570 ssh_runner.go:195] Run: sudo crictl info
	I1124 14:02:39.374509  213570 cni.go:84] Creating CNI manager for ""
	I1124 14:02:39.374529  213570 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:02:39.374546  213570 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:02:39.374567  213570 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-593634 NodeName:embed-certs-593634 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:02:39.374695  213570 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-593634"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
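(A config like the one above can be sanity-checked before the real kubeadm init by rendering it in a dry run, something like:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run   # renders manifests without bringing up the control plane
)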
	
	I1124 14:02:39.374758  213570 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:02:39.383722  213570 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:02:39.383790  213570 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:02:39.392664  213570 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1124 14:02:39.407366  213570 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:02:39.421539  213570 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1124 14:02:39.435750  213570 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:02:39.439949  213570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:02:39.450067  213570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:39.594389  213570 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:02:39.612637  213570 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634 for IP: 192.168.76.2
	I1124 14:02:39.612654  213570 certs.go:195] generating shared ca certs ...
	I1124 14:02:39.612670  213570 certs.go:227] acquiring lock for ca certs: {Name:mkcd8707c782acde0e57168c044a3df942dc4ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.612812  213570 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key
	I1124 14:02:39.612861  213570 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key
	I1124 14:02:39.612868  213570 certs.go:257] generating profile certs ...
	I1124 14:02:39.612921  213570 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.key
	I1124 14:02:39.612933  213570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.crt with IP's: []
	I1124 14:02:39.743608  213570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.crt ...
	I1124 14:02:39.743688  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.crt: {Name:mkdc127047d7bba99c4ff0de010fa76eaa96351a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.743978  213570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.key ...
	I1124 14:02:39.744016  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.key: {Name:mk5b65ad154f9ff1864bd2678d53c0d49d42b626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.744181  213570 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key.20c14e55
	I1124 14:02:39.744223  213570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt.20c14e55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 14:02:39.792416  213570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt.20c14e55 ...
	I1124 14:02:39.792488  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt.20c14e55: {Name:mk898939d3f887dee7ec2cb55d4f9f3c1473f371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.792715  213570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key.20c14e55 ...
	I1124 14:02:39.792751  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key.20c14e55: {Name:mk7634950b7d8fc2f57ae8ad6d2b71e2a24db521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.792893  213570 certs.go:382] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt.20c14e55 -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt
	I1124 14:02:39.793035  213570 certs.go:386] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key.20c14e55 -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key
	I1124 14:02:39.793197  213570 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.key
	I1124 14:02:39.793218  213570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.crt with IP's: []
	I1124 14:02:40.512550  213570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.crt ...
	I1124 14:02:40.512590  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.crt: {Name:mk7e59e3c705bb60e30918ea8dec355fb87a4cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:40.512783  213570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.key ...
	I1124 14:02:40.512800  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.key: {Name:mk1c28b0bf985e63e205a9d607bdda54b666c8d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:40.512994  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem (1338 bytes)
	W1124 14:02:40.513046  213570 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178_empty.pem, impossibly tiny 0 bytes
	I1124 14:02:40.513055  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:02:40.513084  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem (1082 bytes)
	I1124 14:02:40.513116  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:02:40.513155  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem (1679 bytes)
	I1124 14:02:40.513205  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:02:40.513807  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:02:40.534476  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 14:02:40.554772  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:02:40.573041  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:02:40.592563  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 14:02:40.610272  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:02:40.648106  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:02:40.675421  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:02:40.712861  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:02:40.741274  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem --> /usr/share/ca-certificates/4178.pem (1338 bytes)
	I1124 14:02:40.775540  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /usr/share/ca-certificates/41782.pem (1708 bytes)
	I1124 14:02:40.810151  213570 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:02:40.834734  213570 ssh_runner.go:195] Run: openssl version
	I1124 14:02:40.841134  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4178.pem && ln -fs /usr/share/ca-certificates/4178.pem /etc/ssl/certs/4178.pem"
	I1124 14:02:40.853029  213570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4178.pem
	I1124 14:02:40.860558  213570 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4178.pem
	I1124 14:02:40.860626  213570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4178.pem
	I1124 14:02:40.918401  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4178.pem /etc/ssl/certs/51391683.0"
	I1124 14:02:40.928700  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41782.pem && ln -fs /usr/share/ca-certificates/41782.pem /etc/ssl/certs/41782.pem"
	I1124 14:02:40.943881  213570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41782.pem
	I1124 14:02:40.948767  213570 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/41782.pem
	I1124 14:02:40.948833  213570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41782.pem
	I1124 14:02:41.014703  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:02:41.026160  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:02:41.039512  213570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:41.046666  213570 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:41.046734  213570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:41.111180  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:02:41.121762  213570 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:02:41.128022  213570 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:02:41.128075  213570 kubeadm.go:401] StartCluster: {Name:embed-certs-593634 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:02:41.128164  213570 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 14:02:41.128228  213570 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:02:41.181954  213570 cri.go:89] found id: ""
	I1124 14:02:41.182043  213570 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:02:41.192535  213570 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:02:41.201483  213570 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:02:41.201548  213570 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:02:41.210919  213570 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:02:41.210940  213570 kubeadm.go:158] found existing configuration files:
	
	I1124 14:02:41.210999  213570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 14:02:41.223268  213570 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:02:41.223332  213570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:02:41.239377  213570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 14:02:41.251095  213570 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:02:41.251165  213570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:02:41.259252  213570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 14:02:41.268559  213570 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:02:41.268620  213570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:02:41.282438  213570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 14:02:41.293894  213570 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:02:41.293975  213570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 14:02:41.321578  213570 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:02:41.440101  213570 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:02:41.445250  213570 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:02:41.492866  213570 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:02:41.499280  213570 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 14:02:41.499334  213570 kubeadm.go:319] OS: Linux
	I1124 14:02:41.499382  213570 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:02:41.499444  213570 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 14:02:41.499504  213570 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:02:41.499557  213570 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:02:41.499612  213570 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:02:41.499666  213570 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:02:41.499716  213570 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:02:41.499769  213570 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:02:41.499820  213570 kubeadm.go:319] CGROUPS_BLKIO: enabled
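(The CGROUPS_* lines come from kubeadm's system verification; on a cgroup v1 host like this one (note the earlier "cgroups v1 support is in maintenance mode" warning) the same information is visible directly:

    ls /sys/fs/cgroup      # one directory per mounted v1 controller (cpu, memory, pids, ...)
    cat /proc/cgroups      # per-controller enabled column
)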
	I1124 14:02:41.625341  213570 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:02:41.625456  213570 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:02:41.625558  213570 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:02:41.636268  213570 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 14:02:41.641768  213570 out.go:252]   - Generating certificates and keys ...
	I1124 14:02:41.641865  213570 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:02:41.641939  213570 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:02:42.619223  213570 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:02:43.011953  213570 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 14:02:43.483393  213570 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:02:43.810126  213570 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:02:44.825951  213570 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:02:44.828294  213570 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-593634 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 14:02:45.647118  213570 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:02:45.647643  213570 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-593634 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 14:02:45.905141  213570 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:02:46.000202  213570 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:02:46.120215  213570 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:02:46.120734  213570 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:02:46.900838  213570 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:02:47.805102  213570 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 14:02:48.517833  213570 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:02:49.348256  213570 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:02:49.516941  213570 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:02:49.518037  213570 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:02:49.520983  213570 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:02:49.523689  213570 out.go:252]   - Booting up control plane ...
	I1124 14:02:49.523845  213570 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:02:49.523973  213570 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:02:49.525837  213570 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:02:49.554261  213570 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:02:49.554370  213570 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:02:49.565946  213570 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:02:49.567436  213570 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:02:49.571311  213570 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:02:49.806053  213570 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:02:49.806172  213570 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
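The kubelet check above simply polls http://127.0.0.1:10248/healthz until it answers or the 4m0s budget runs out. A minimal manual equivalent, run inside the node and assuming curl is available there (kubeadm's own implementation is in Go and differs in detail), would be:

	# poll the kubelet healthz endpoint until it responds with 2xx
	until curl -sf http://127.0.0.1:10248/healthz >/dev/null; do sleep 1; done
	echo "kubelet is healthy"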
	I1124 14:02:52.457159  212383 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:02:52.457215  212383 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:02:52.457303  212383 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:02:52.457359  212383 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 14:02:52.457393  212383 kubeadm.go:319] OS: Linux
	I1124 14:02:52.457438  212383 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:02:52.457486  212383 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 14:02:52.457532  212383 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:02:52.457580  212383 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:02:52.457628  212383 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:02:52.457682  212383 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:02:52.457728  212383 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:02:52.457775  212383 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:02:52.457821  212383 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 14:02:52.457893  212383 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:02:52.457987  212383 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:02:52.458077  212383 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:02:52.458138  212383 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 14:02:52.461386  212383 out.go:252]   - Generating certificates and keys ...
	I1124 14:02:52.461491  212383 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:02:52.461556  212383 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:02:52.461623  212383 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:02:52.461680  212383 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 14:02:52.461741  212383 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:02:52.461791  212383 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:02:52.461845  212383 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:02:52.461977  212383 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-609438 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:02:52.462028  212383 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:02:52.462157  212383 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-609438 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:02:52.462223  212383 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:02:52.462287  212383 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:02:52.462339  212383 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:02:52.462402  212383 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:02:52.462458  212383 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:02:52.462521  212383 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 14:02:52.462611  212383 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:02:52.462674  212383 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:02:52.462729  212383 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:02:52.462820  212383 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:02:52.462893  212383 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:02:52.465845  212383 out.go:252]   - Booting up control plane ...
	I1124 14:02:52.466035  212383 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:02:52.466163  212383 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:02:52.466242  212383 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:02:52.466364  212383 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:02:52.466465  212383 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:02:52.466577  212383 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:02:52.466668  212383 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:02:52.466709  212383 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:02:52.466848  212383 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:02:52.466960  212383 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 14:02:52.467024  212383 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.018392479s
	I1124 14:02:52.467123  212383 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:02:52.467209  212383 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1124 14:02:52.467305  212383 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:02:52.467389  212383 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 14:02:52.467470  212383 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.741501846s
	I1124 14:02:52.467552  212383 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.503243598s
	I1124 14:02:52.467627  212383 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.824874472s
	I1124 14:02:52.467741  212383 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:02:52.467875  212383 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:02:52.467955  212383 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:02:52.468176  212383 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-609438 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:02:52.468237  212383 kubeadm.go:319] [bootstrap-token] Using token: vzq4ay.serxkml6gk1378wv
	I1124 14:02:52.471358  212383 out.go:252]   - Configuring RBAC rules ...
	I1124 14:02:52.471499  212383 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:02:52.471591  212383 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:02:52.471743  212383 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:02:52.471880  212383 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:02:52.472017  212383 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:02:52.472112  212383 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:02:52.472236  212383 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:02:52.472282  212383 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:02:52.472331  212383 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:02:52.472335  212383 kubeadm.go:319] 
	I1124 14:02:52.472400  212383 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:02:52.472411  212383 kubeadm.go:319] 
	I1124 14:02:52.472495  212383 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:02:52.472499  212383 kubeadm.go:319] 
	I1124 14:02:52.472526  212383 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:02:52.472589  212383 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:02:52.472643  212383 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:02:52.472647  212383 kubeadm.go:319] 
	I1124 14:02:52.472705  212383 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:02:52.472709  212383 kubeadm.go:319] 
	I1124 14:02:52.472759  212383 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:02:52.472763  212383 kubeadm.go:319] 
	I1124 14:02:52.472819  212383 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:02:52.472899  212383 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:02:52.472973  212383 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:02:52.472976  212383 kubeadm.go:319] 
	I1124 14:02:52.473067  212383 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:02:52.473150  212383 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:02:52.473154  212383 kubeadm.go:319] 
	I1124 14:02:52.473251  212383 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token vzq4ay.serxkml6gk1378wv \
	I1124 14:02:52.473364  212383 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac \
	I1124 14:02:52.473385  212383 kubeadm.go:319] 	--control-plane 
	I1124 14:02:52.473389  212383 kubeadm.go:319] 
	I1124 14:02:52.473481  212383 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:02:52.473484  212383 kubeadm.go:319] 
	I1124 14:02:52.473573  212383 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token vzq4ay.serxkml6gk1378wv \
	I1124 14:02:52.473696  212383 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac 
	I1124 14:02:52.473705  212383 cni.go:84] Creating CNI manager for ""
	I1124 14:02:52.473711  212383 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:02:52.476852  212383 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:02:52.479922  212383 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:02:52.489605  212383 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 14:02:52.489623  212383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:02:52.536790  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 14:02:53.413438  212383 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:02:53.413571  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:53.413654  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-609438 minikube.k8s.io/updated_at=2025_11_24T14_02_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=default-k8s-diff-port-609438 minikube.k8s.io/primary=true
	I1124 14:02:53.507283  212383 ops.go:34] apiserver oom_adj: -16
	I1124 14:02:53.863033  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:50.808351  213570 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002003298s
	I1124 14:02:50.815187  213570 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:02:50.815743  213570 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 14:02:50.816608  213570 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:02:50.818559  213570 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 14:02:54.363074  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:54.863777  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:55.363086  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:55.863114  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:56.363110  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:56.863441  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:57.057097  212383 kubeadm.go:1114] duration metric: took 3.643574546s to wait for elevateKubeSystemPrivileges
	I1124 14:02:57.057124  212383 kubeadm.go:403] duration metric: took 26.886093324s to StartCluster
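The run of identical "kubectl get sa default" commands above is minikube polling until the cluster's default service account has been created, as part of the kube-system privilege elevation whose duration the summary line reports (about 3.6s here). With the paths taken from the log (the real retry logic lives in minikube's Go code), the wait amounts to roughly this sketch:

	KUBECTL=/var/lib/minikube/binaries/v1.34.1/kubectl
	until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done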
	I1124 14:02:57.057141  212383 settings.go:142] acquiring lock: {Name:mk2b0bbff4d8ced468f457362668d43b813dc062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:57.057204  212383 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:02:57.057903  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/kubeconfig: {Name:mk246d21eaffbd8aca2abdc1b2f89d6fcc902f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:57.058100  212383 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 14:02:57.058223  212383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:02:57.058472  212383 config.go:182] Loaded profile config "default-k8s-diff-port-609438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:02:57.058507  212383 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:02:57.058563  212383 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-609438"
	I1124 14:02:57.058577  212383 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-609438"
	I1124 14:02:57.058598  212383 host.go:66] Checking if "default-k8s-diff-port-609438" exists ...
	I1124 14:02:57.059105  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:57.059672  212383 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-609438"
	I1124 14:02:57.059698  212383 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-609438"
	I1124 14:02:57.060034  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:57.062295  212383 out.go:179] * Verifying Kubernetes components...
	I1124 14:02:57.067608  212383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:57.096470  212383 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:02:57.100431  212383 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:02:57.100453  212383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:02:57.100520  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:57.108007  212383 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-609438"
	I1124 14:02:57.108047  212383 host.go:66] Checking if "default-k8s-diff-port-609438" exists ...
	I1124 14:02:57.108469  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:57.150290  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:57.151191  212383 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:02:57.151207  212383 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:02:57.151270  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:57.180229  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:57.835181  212383 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:02:57.835375  212383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
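The sed pipeline above edits the CoreDNS Corefile from the coredns ConfigMap before replacing it: it inserts a hosts block ahead of the existing forward directive and a log directive ahead of errors. Reconstructed from that expression (the surrounding stock Corefile lines are elided), the injected fragments look like:

	log
	errors
	...
	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf ...

This is what makes host.minikube.internal resolve to the host gateway (192.168.85.1 on this cluster's network) from inside pods, matching the "host record injected into CoreDNS's ConfigMap" line that appears later in the log.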
	I1124 14:02:57.843296  212383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:02:58.048720  212383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:02:55.577519  213570 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.75919955s
	I1124 14:02:57.488695  213570 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.669631688s
	I1124 14:02:59.319576  213570 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.503330978s
	I1124 14:02:59.347736  213570 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:02:59.365960  213570 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:02:59.389045  213570 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:02:59.389257  213570 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-593634 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:02:59.404075  213570 kubeadm.go:319] [bootstrap-token] Using token: sdluey.txxijid8fmo5jyau
	I1124 14:02:59.018640  212383 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.183422592s)
	I1124 14:02:59.019392  212383 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-609438" to be "Ready" ...
	I1124 14:02:59.019719  212383 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.176349884s)
	I1124 14:02:59.020165  212383 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.184766141s)
	I1124 14:02:59.020204  212383 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 14:02:59.505284  212383 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.456466205s)
	I1124 14:02:59.508376  212383 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1124 14:02:59.407186  213570 out.go:252]   - Configuring RBAC rules ...
	I1124 14:02:59.407326  213570 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:02:59.413876  213570 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:02:59.424114  213570 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:02:59.429247  213570 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:02:59.435888  213570 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:02:59.441214  213570 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:02:59.729166  213570 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:03:00.281783  213570 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:03:00.726578  213570 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:03:00.731583  213570 kubeadm.go:319] 
	I1124 14:03:00.731683  213570 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:03:00.731705  213570 kubeadm.go:319] 
	I1124 14:03:00.731783  213570 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:03:00.731791  213570 kubeadm.go:319] 
	I1124 14:03:00.731817  213570 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:03:00.731879  213570 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:03:00.731955  213570 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:03:00.731964  213570 kubeadm.go:319] 
	I1124 14:03:00.732019  213570 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:03:00.732029  213570 kubeadm.go:319] 
	I1124 14:03:00.732077  213570 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:03:00.732085  213570 kubeadm.go:319] 
	I1124 14:03:00.732143  213570 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:03:00.732222  213570 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:03:00.732296  213570 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:03:00.732305  213570 kubeadm.go:319] 
	I1124 14:03:00.732391  213570 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:03:00.732470  213570 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:03:00.732477  213570 kubeadm.go:319] 
	I1124 14:03:00.732562  213570 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token sdluey.txxijid8fmo5jyau \
	I1124 14:03:00.732674  213570 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac \
	I1124 14:03:00.732700  213570 kubeadm.go:319] 	--control-plane 
	I1124 14:03:00.732708  213570 kubeadm.go:319] 
	I1124 14:03:00.732793  213570 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:03:00.732801  213570 kubeadm.go:319] 
	I1124 14:03:00.732883  213570 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token sdluey.txxijid8fmo5jyau \
	I1124 14:03:00.732989  213570 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac 
	I1124 14:03:00.734466  213570 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 14:03:00.734704  213570 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:03:00.734818  213570 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 14:03:00.734840  213570 cni.go:84] Creating CNI manager for ""
	I1124 14:03:00.734847  213570 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:03:00.738356  213570 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:02:59.511261  212383 addons.go:530] duration metric: took 2.452743621s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 14:02:59.527883  212383 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-609438" context rescaled to 1 replicas
	W1124 14:03:01.022799  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:03.522484  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	I1124 14:03:00.741285  213570 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:03:00.747200  213570 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 14:03:00.747222  213570 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:03:00.762942  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 14:03:01.083756  213570 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:03:01.083943  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:01.084029  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-593634 minikube.k8s.io/updated_at=2025_11_24T14_03_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=embed-certs-593634 minikube.k8s.io/primary=true
	I1124 14:03:01.235259  213570 ops.go:34] apiserver oom_adj: -16
	I1124 14:03:01.235388  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:01.736213  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:02.235575  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:02.735531  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:03.235547  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:03.735985  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:04.235605  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:04.735509  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:05.235491  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:05.735597  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:05.862499  213570 kubeadm.go:1114] duration metric: took 4.778639859s to wait for elevateKubeSystemPrivileges
	I1124 14:03:05.862539  213570 kubeadm.go:403] duration metric: took 24.734468729s to StartCluster
	I1124 14:03:05.862559  213570 settings.go:142] acquiring lock: {Name:mk2b0bbff4d8ced468f457362668d43b813dc062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:03:05.862641  213570 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:03:05.864034  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/kubeconfig: {Name:mk246d21eaffbd8aca2abdc1b2f89d6fcc902f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:03:05.864291  213570 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:03:05.864292  213570 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 14:03:05.864627  213570 config.go:182] Loaded profile config "embed-certs-593634": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:03:05.864675  213570 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:03:05.864760  213570 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-593634"
	I1124 14:03:05.864775  213570 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-593634"
	I1124 14:03:05.864814  213570 host.go:66] Checking if "embed-certs-593634" exists ...
	I1124 14:03:05.865448  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:03:05.865928  213570 addons.go:70] Setting default-storageclass=true in profile "embed-certs-593634"
	I1124 14:03:05.865962  213570 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-593634"
	I1124 14:03:05.866329  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:03:05.867882  213570 out.go:179] * Verifying Kubernetes components...
	I1124 14:03:05.871678  213570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:03:05.918376  213570 addons.go:239] Setting addon default-storageclass=true in "embed-certs-593634"
	I1124 14:03:05.918427  213570 host.go:66] Checking if "embed-certs-593634" exists ...
	I1124 14:03:05.919006  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:03:05.928779  213570 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:03:05.931678  213570 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:03:05.931712  213570 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:03:05.931788  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:03:05.962335  213570 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:03:05.962376  213570 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:03:05.962476  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:03:05.993403  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:03:06.003508  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:03:06.391385  213570 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 14:03:06.391488  213570 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:03:06.435021  213570 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:03:06.439159  213570 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:03:06.771396  213570 node_ready.go:35] waiting up to 6m0s for node "embed-certs-593634" to be "Ready" ...
	I1124 14:03:06.771837  213570 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 14:03:07.089005  213570 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1124 14:03:06.022254  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:08.023381  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	I1124 14:03:07.091942  213570 addons.go:530] duration metric: took 1.22725676s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 14:03:07.275615  213570 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-593634" context rescaled to 1 replicas
	W1124 14:03:08.774304  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:10.522868  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:12.525848  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:10.776272  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:13.274310  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:15.274775  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:14.526016  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:17.023060  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:17.774691  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:20.274332  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:19.523467  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:21.524121  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:23.524697  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:22.774276  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:24.775051  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:26.022538  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:28.023018  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:27.274791  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:29.275073  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:30.030420  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:32.524753  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:31.774872  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:34.274493  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:35.023155  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:37.025173  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:36.275275  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:38.774804  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	I1124 14:03:39.023101  212383 node_ready.go:49] node "default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:39.023134  212383 node_ready.go:38] duration metric: took 40.003724122s for node "default-k8s-diff-port-609438" to be "Ready" ...
	I1124 14:03:39.023149  212383 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:03:39.023211  212383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:03:39.035892  212383 api_server.go:72] duration metric: took 41.977763431s to wait for apiserver process to appear ...
	I1124 14:03:39.035957  212383 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:03:39.035992  212383 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 14:03:39.045601  212383 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 14:03:39.046766  212383 api_server.go:141] control plane version: v1.34.1
	I1124 14:03:39.046790  212383 api_server.go:131] duration metric: took 10.8162ms to wait for apiserver health ...
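The healthz probe above is an ordinary HTTPS GET against the apiserver; /healthz is normally readable without credentials under default RBAC (the system:public-info-viewer role), so a quick manual check against this cluster could look like the following, for illustration only:

	curl -sk https://192.168.85.2:8444/healthz
	# prints: ok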
	I1124 14:03:39.046799  212383 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:03:39.057366  212383 system_pods.go:59] 8 kube-system pods found
	I1124 14:03:39.057464  212383 system_pods.go:61] "coredns-66bc5c9577-qctbs" [cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:39.057486  212383 system_pods.go:61] "etcd-default-k8s-diff-port-609438" [3e2d5715-12d7-441e-9747-edb4c6f78577] Running
	I1124 14:03:39.057527  212383 system_pods.go:61] "kindnet-jcqb9" [92836c58-7b28-4b1b-838d-9491cd23823b] Running
	I1124 14:03:39.057552  212383 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-609438" [b6e69d70-9c7f-4b06-8ba8-a37c17d79bb9] Running
	I1124 14:03:39.057573  212383 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-609438" [a1dba2ac-ba3c-4282-966e-c7abffbb6b9a] Running
	I1124 14:03:39.057612  212383 system_pods.go:61] "kube-proxy-frlpg" [814cc9f1-7449-431c-a35d-3ac3b4d05db9] Running
	I1124 14:03:39.057637  212383 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-609438" [a87b6471-6253-4b9c-abd1-83d029df6343] Running
	I1124 14:03:39.057664  212383 system_pods.go:61] "storage-provisioner" [98d7eb97-3a94-4904-9af3-f063689cec40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:39.057702  212383 system_pods.go:74] duration metric: took 10.895381ms to wait for pod list to return data ...
	I1124 14:03:39.057729  212383 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:03:39.068310  212383 default_sa.go:45] found service account: "default"
	I1124 14:03:39.068335  212383 default_sa.go:55] duration metric: took 10.585051ms for default service account to be created ...
	I1124 14:03:39.068346  212383 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:03:39.072487  212383 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:39.072578  212383 system_pods.go:89] "coredns-66bc5c9577-qctbs" [cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:39.072601  212383 system_pods.go:89] "etcd-default-k8s-diff-port-609438" [3e2d5715-12d7-441e-9747-edb4c6f78577] Running
	I1124 14:03:39.072648  212383 system_pods.go:89] "kindnet-jcqb9" [92836c58-7b28-4b1b-838d-9491cd23823b] Running
	I1124 14:03:39.072673  212383 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-609438" [b6e69d70-9c7f-4b06-8ba8-a37c17d79bb9] Running
	I1124 14:03:39.072696  212383 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-609438" [a1dba2ac-ba3c-4282-966e-c7abffbb6b9a] Running
	I1124 14:03:39.072735  212383 system_pods.go:89] "kube-proxy-frlpg" [814cc9f1-7449-431c-a35d-3ac3b4d05db9] Running
	I1124 14:03:39.072761  212383 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-609438" [a87b6471-6253-4b9c-abd1-83d029df6343] Running
	I1124 14:03:39.072785  212383 system_pods.go:89] "storage-provisioner" [98d7eb97-3a94-4904-9af3-f063689cec40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:39.072847  212383 retry.go:31] will retry after 264.799989ms: missing components: kube-dns
	I1124 14:03:39.342534  212383 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:39.342686  212383 system_pods.go:89] "coredns-66bc5c9577-qctbs" [cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:39.342725  212383 system_pods.go:89] "etcd-default-k8s-diff-port-609438" [3e2d5715-12d7-441e-9747-edb4c6f78577] Running
	I1124 14:03:39.342754  212383 system_pods.go:89] "kindnet-jcqb9" [92836c58-7b28-4b1b-838d-9491cd23823b] Running
	I1124 14:03:39.342775  212383 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-609438" [b6e69d70-9c7f-4b06-8ba8-a37c17d79bb9] Running
	I1124 14:03:39.342816  212383 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-609438" [a1dba2ac-ba3c-4282-966e-c7abffbb6b9a] Running
	I1124 14:03:39.342842  212383 system_pods.go:89] "kube-proxy-frlpg" [814cc9f1-7449-431c-a35d-3ac3b4d05db9] Running
	I1124 14:03:39.342864  212383 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-609438" [a87b6471-6253-4b9c-abd1-83d029df6343] Running
	I1124 14:03:39.342912  212383 system_pods.go:89] "storage-provisioner" [98d7eb97-3a94-4904-9af3-f063689cec40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:39.342941  212383 retry.go:31] will retry after 272.670872ms: missing components: kube-dns
	I1124 14:03:39.626215  212383 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:39.626242  212383 system_pods.go:89] "coredns-66bc5c9577-qctbs" [cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de] Running
	I1124 14:03:39.626248  212383 system_pods.go:89] "etcd-default-k8s-diff-port-609438" [3e2d5715-12d7-441e-9747-edb4c6f78577] Running
	I1124 14:03:39.626254  212383 system_pods.go:89] "kindnet-jcqb9" [92836c58-7b28-4b1b-838d-9491cd23823b] Running
	I1124 14:03:39.626258  212383 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-609438" [b6e69d70-9c7f-4b06-8ba8-a37c17d79bb9] Running
	I1124 14:03:39.626271  212383 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-609438" [a1dba2ac-ba3c-4282-966e-c7abffbb6b9a] Running
	I1124 14:03:39.626274  212383 system_pods.go:89] "kube-proxy-frlpg" [814cc9f1-7449-431c-a35d-3ac3b4d05db9] Running
	I1124 14:03:39.626278  212383 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-609438" [a87b6471-6253-4b9c-abd1-83d029df6343] Running
	I1124 14:03:39.626282  212383 system_pods.go:89] "storage-provisioner" [98d7eb97-3a94-4904-9af3-f063689cec40] Running
	I1124 14:03:39.626289  212383 system_pods.go:126] duration metric: took 557.937565ms to wait for k8s-apps to be running ...
	I1124 14:03:39.626297  212383 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:03:39.626351  212383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:03:39.649756  212383 system_svc.go:56] duration metric: took 23.432209ms WaitForService to wait for kubelet
	I1124 14:03:39.649833  212383 kubeadm.go:587] duration metric: took 42.591709093s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:03:39.649867  212383 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:03:39.658388  212383 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:03:39.658418  212383 node_conditions.go:123] node cpu capacity is 2
	I1124 14:03:39.658433  212383 node_conditions.go:105] duration metric: took 8.545281ms to run NodePressure ...
	I1124 14:03:39.658445  212383 start.go:242] waiting for startup goroutines ...
	I1124 14:03:39.658453  212383 start.go:247] waiting for cluster config update ...
	I1124 14:03:39.658464  212383 start.go:256] writing updated cluster config ...
	I1124 14:03:39.658759  212383 ssh_runner.go:195] Run: rm -f paused
	I1124 14:03:39.662925  212383 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:03:39.668038  212383 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qctbs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.673734  212383 pod_ready.go:94] pod "coredns-66bc5c9577-qctbs" is "Ready"
	I1124 14:03:39.673815  212383 pod_ready.go:86] duration metric: took 5.694049ms for pod "coredns-66bc5c9577-qctbs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.676472  212383 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.685362  212383 pod_ready.go:94] pod "etcd-default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:39.685439  212383 pod_ready.go:86] duration metric: took 8.894816ms for pod "etcd-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.688312  212383 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.695577  212383 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:39.695663  212383 pod_ready.go:86] duration metric: took 7.234136ms for pod "kube-apiserver-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.698560  212383 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:40.070303  212383 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:40.070379  212383 pod_ready.go:86] duration metric: took 371.738474ms for pod "kube-controller-manager-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:40.267521  212383 pod_ready.go:83] waiting for pod "kube-proxy-frlpg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:40.667723  212383 pod_ready.go:94] pod "kube-proxy-frlpg" is "Ready"
	I1124 14:03:40.667753  212383 pod_ready.go:86] duration metric: took 400.161589ms for pod "kube-proxy-frlpg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:40.868901  212383 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:41.268703  212383 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:41.268732  212383 pod_ready.go:86] duration metric: took 399.797357ms for pod "kube-scheduler-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:41.268746  212383 pod_ready.go:40] duration metric: took 1.605732693s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:03:41.331086  212383 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:03:41.336425  212383 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-609438" cluster and "default" namespace by default
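With that context selected, the freshly created cluster can be inspected directly from the host, for example:

	kubectl --context default-k8s-diff-port-609438 get nodes -o wide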
	W1124 14:03:41.279143  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:43.774833  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:45.775431  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:48.275442  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	I1124 14:03:48.774783  213570 node_ready.go:49] node "embed-certs-593634" is "Ready"
	I1124 14:03:48.774815  213570 node_ready.go:38] duration metric: took 42.00333297s for node "embed-certs-593634" to be "Ready" ...
	I1124 14:03:48.774830  213570 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:03:48.774888  213570 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:03:48.787878  213570 api_server.go:72] duration metric: took 42.923556551s to wait for apiserver process to appear ...
	I1124 14:03:48.787947  213570 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:03:48.787968  213570 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 14:03:48.796278  213570 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 14:03:48.797266  213570 api_server.go:141] control plane version: v1.34.1
	I1124 14:03:48.797292  213570 api_server.go:131] duration metric: took 9.336207ms to wait for apiserver health ...
	I1124 14:03:48.797301  213570 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:03:48.800410  213570 system_pods.go:59] 8 kube-system pods found
	I1124 14:03:48.800444  213570 system_pods.go:61] "coredns-66bc5c9577-jjgxr" [9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:48.800451  213570 system_pods.go:61] "etcd-embed-certs-593634" [1ad343da-778d-475d-a5ce-fc08e11f693c] Running
	I1124 14:03:48.800456  213570 system_pods.go:61] "kindnet-2xhmk" [a716bd95-8847-4c78-a39c-0234825c66fb] Running
	I1124 14:03:48.800460  213570 system_pods.go:61] "kube-apiserver-embed-certs-593634" [2a958fa8-640e-4d6e-80a4-4cb5abb541bf] Running
	I1124 14:03:48.800464  213570 system_pods.go:61] "kube-controller-manager-embed-certs-593634" [5897c242-4f69-4740-bc24-712bc8bdb2f6] Running
	I1124 14:03:48.800468  213570 system_pods.go:61] "kube-proxy-t2c22" [62324907-3da3-4c2c-887d-798d8375da05] Running
	I1124 14:03:48.800472  213570 system_pods.go:61] "kube-scheduler-embed-certs-593634" [b881f394-008a-4da5-87fe-94a9d922e12c] Running
	I1124 14:03:48.800477  213570 system_pods.go:61] "storage-provisioner" [45b3891f-97a3-4dcb-bafa-b1400a3b4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:48.800489  213570 system_pods.go:74] duration metric: took 3.183028ms to wait for pod list to return data ...
	I1124 14:03:48.800497  213570 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:03:48.803083  213570 default_sa.go:45] found service account: "default"
	I1124 14:03:48.803109  213570 default_sa.go:55] duration metric: took 2.606184ms for default service account to be created ...
	I1124 14:03:48.803119  213570 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:03:48.806286  213570 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:48.806321  213570 system_pods.go:89] "coredns-66bc5c9577-jjgxr" [9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:48.806328  213570 system_pods.go:89] "etcd-embed-certs-593634" [1ad343da-778d-475d-a5ce-fc08e11f693c] Running
	I1124 14:03:48.806334  213570 system_pods.go:89] "kindnet-2xhmk" [a716bd95-8847-4c78-a39c-0234825c66fb] Running
	I1124 14:03:48.806365  213570 system_pods.go:89] "kube-apiserver-embed-certs-593634" [2a958fa8-640e-4d6e-80a4-4cb5abb541bf] Running
	I1124 14:03:48.806377  213570 system_pods.go:89] "kube-controller-manager-embed-certs-593634" [5897c242-4f69-4740-bc24-712bc8bdb2f6] Running
	I1124 14:03:48.806381  213570 system_pods.go:89] "kube-proxy-t2c22" [62324907-3da3-4c2c-887d-798d8375da05] Running
	I1124 14:03:48.806385  213570 system_pods.go:89] "kube-scheduler-embed-certs-593634" [b881f394-008a-4da5-87fe-94a9d922e12c] Running
	I1124 14:03:48.806395  213570 system_pods.go:89] "storage-provisioner" [45b3891f-97a3-4dcb-bafa-b1400a3b4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:48.806421  213570 retry.go:31] will retry after 312.175321ms: missing components: kube-dns
	I1124 14:03:49.124170  213570 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:49.124261  213570 system_pods.go:89] "coredns-66bc5c9577-jjgxr" [9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:49.124283  213570 system_pods.go:89] "etcd-embed-certs-593634" [1ad343da-778d-475d-a5ce-fc08e11f693c] Running
	I1124 14:03:49.124327  213570 system_pods.go:89] "kindnet-2xhmk" [a716bd95-8847-4c78-a39c-0234825c66fb] Running
	I1124 14:03:49.124354  213570 system_pods.go:89] "kube-apiserver-embed-certs-593634" [2a958fa8-640e-4d6e-80a4-4cb5abb541bf] Running
	I1124 14:03:49.124376  213570 system_pods.go:89] "kube-controller-manager-embed-certs-593634" [5897c242-4f69-4740-bc24-712bc8bdb2f6] Running
	I1124 14:03:49.124412  213570 system_pods.go:89] "kube-proxy-t2c22" [62324907-3da3-4c2c-887d-798d8375da05] Running
	I1124 14:03:49.124439  213570 system_pods.go:89] "kube-scheduler-embed-certs-593634" [b881f394-008a-4da5-87fe-94a9d922e12c] Running
	I1124 14:03:49.124462  213570 system_pods.go:89] "storage-provisioner" [45b3891f-97a3-4dcb-bafa-b1400a3b4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:49.124508  213570 retry.go:31] will retry after 274.806291ms: missing components: kube-dns
	I1124 14:03:49.404719  213570 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:49.404754  213570 system_pods.go:89] "coredns-66bc5c9577-jjgxr" [9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:49.404761  213570 system_pods.go:89] "etcd-embed-certs-593634" [1ad343da-778d-475d-a5ce-fc08e11f693c] Running
	I1124 14:03:49.404768  213570 system_pods.go:89] "kindnet-2xhmk" [a716bd95-8847-4c78-a39c-0234825c66fb] Running
	I1124 14:03:49.404772  213570 system_pods.go:89] "kube-apiserver-embed-certs-593634" [2a958fa8-640e-4d6e-80a4-4cb5abb541bf] Running
	I1124 14:03:49.404776  213570 system_pods.go:89] "kube-controller-manager-embed-certs-593634" [5897c242-4f69-4740-bc24-712bc8bdb2f6] Running
	I1124 14:03:49.404780  213570 system_pods.go:89] "kube-proxy-t2c22" [62324907-3da3-4c2c-887d-798d8375da05] Running
	I1124 14:03:49.404784  213570 system_pods.go:89] "kube-scheduler-embed-certs-593634" [b881f394-008a-4da5-87fe-94a9d922e12c] Running
	I1124 14:03:49.404789  213570 system_pods.go:89] "storage-provisioner" [45b3891f-97a3-4dcb-bafa-b1400a3b4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:49.404803  213570 retry.go:31] will retry after 483.554421ms: missing components: kube-dns
	I1124 14:03:49.894105  213570 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:49.894135  213570 system_pods.go:89] "coredns-66bc5c9577-jjgxr" [9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66] Running
	I1124 14:03:49.894142  213570 system_pods.go:89] "etcd-embed-certs-593634" [1ad343da-778d-475d-a5ce-fc08e11f693c] Running
	I1124 14:03:49.894146  213570 system_pods.go:89] "kindnet-2xhmk" [a716bd95-8847-4c78-a39c-0234825c66fb] Running
	I1124 14:03:49.894151  213570 system_pods.go:89] "kube-apiserver-embed-certs-593634" [2a958fa8-640e-4d6e-80a4-4cb5abb541bf] Running
	I1124 14:03:49.894156  213570 system_pods.go:89] "kube-controller-manager-embed-certs-593634" [5897c242-4f69-4740-bc24-712bc8bdb2f6] Running
	I1124 14:03:49.894161  213570 system_pods.go:89] "kube-proxy-t2c22" [62324907-3da3-4c2c-887d-798d8375da05] Running
	I1124 14:03:49.894165  213570 system_pods.go:89] "kube-scheduler-embed-certs-593634" [b881f394-008a-4da5-87fe-94a9d922e12c] Running
	I1124 14:03:49.894169  213570 system_pods.go:89] "storage-provisioner" [45b3891f-97a3-4dcb-bafa-b1400a3b4480] Running
	I1124 14:03:49.894178  213570 system_pods.go:126] duration metric: took 1.091052703s to wait for k8s-apps to be running ...
	I1124 14:03:49.894185  213570 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:03:49.894238  213570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:03:49.917451  213570 system_svc.go:56] duration metric: took 23.256451ms WaitForService to wait for kubelet
	I1124 14:03:49.917492  213570 kubeadm.go:587] duration metric: took 44.053162457s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:03:49.917516  213570 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:03:49.923758  213570 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:03:49.923792  213570 node_conditions.go:123] node cpu capacity is 2
	I1124 14:03:49.923807  213570 node_conditions.go:105] duration metric: took 6.285283ms to run NodePressure ...
	I1124 14:03:49.923820  213570 start.go:242] waiting for startup goroutines ...
	I1124 14:03:49.923828  213570 start.go:247] waiting for cluster config update ...
	I1124 14:03:49.923839  213570 start.go:256] writing updated cluster config ...
	I1124 14:03:49.924206  213570 ssh_runner.go:195] Run: rm -f paused
	I1124 14:03:49.927626  213570 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:03:49.931893  213570 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jjgxr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.942828  213570 pod_ready.go:94] pod "coredns-66bc5c9577-jjgxr" is "Ready"
	I1124 14:03:49.942856  213570 pod_ready.go:86] duration metric: took 10.828769ms for pod "coredns-66bc5c9577-jjgxr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.945912  213570 pod_ready.go:83] waiting for pod "etcd-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.951340  213570 pod_ready.go:94] pod "etcd-embed-certs-593634" is "Ready"
	I1124 14:03:49.951371  213570 pod_ready.go:86] duration metric: took 5.432769ms for pod "etcd-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.955119  213570 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.962767  213570 pod_ready.go:94] pod "kube-apiserver-embed-certs-593634" is "Ready"
	I1124 14:03:49.962795  213570 pod_ready.go:86] duration metric: took 7.64808ms for pod "kube-apiserver-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.966857  213570 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:50.332804  213570 pod_ready.go:94] pod "kube-controller-manager-embed-certs-593634" is "Ready"
	I1124 14:03:50.332831  213570 pod_ready.go:86] duration metric: took 365.944063ms for pod "kube-controller-manager-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:50.533022  213570 pod_ready.go:83] waiting for pod "kube-proxy-t2c22" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:50.932652  213570 pod_ready.go:94] pod "kube-proxy-t2c22" is "Ready"
	I1124 14:03:50.932687  213570 pod_ready.go:86] duration metric: took 399.640527ms for pod "kube-proxy-t2c22" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:51.133145  213570 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:51.532686  213570 pod_ready.go:94] pod "kube-scheduler-embed-certs-593634" is "Ready"
	I1124 14:03:51.532723  213570 pod_ready.go:86] duration metric: took 399.546574ms for pod "kube-scheduler-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:51.532738  213570 pod_ready.go:40] duration metric: took 1.605063201s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:03:51.763100  213570 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:03:51.766630  213570 out.go:179] * Done! kubectl is now configured to use "embed-certs-593634" cluster and "default" namespace by default
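	
	The start log above waits for the node to become Ready, for the kube-apiserver process to appear, and finally for the /healthz endpoint to answer before declaring the cluster usable. A minimal sketch of that last step follows; it is illustrative only and not minikube's actual implementation. The waitForHealthz helper and the 2-minute timeout are assumptions made for the example, the endpoint URL is copied from the log, and TLS verification is skipped purely for brevity.
	
	// Illustrative sketch (not minikube's code): poll an apiserver /healthz
	// endpoint until it returns HTTP 200 with body "ok", as the log above
	// records for https://192.168.76.2:8443/healthz.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz is a hypothetical helper, not part of minikube.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// InsecureSkipVerify keeps the example short; a real client
			// would trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}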
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	f82ca073066cf       1611cd07b61d5       8 seconds ago        Running             busybox                   0                   d2bab1c2ee203       busybox                                      default
	8f3613b1af9f5       138784d87c9c5       14 seconds ago       Running             coredns                   0                   56eb5cfd1d547       coredns-66bc5c9577-jjgxr                     kube-system
	422cf5815a208       ba04bb24b9575       14 seconds ago       Running             storage-provisioner       0                   a4f25514c1964       storage-provisioner                          kube-system
	488f43af45940       05baa95f5142d       54 seconds ago       Running             kube-proxy                0                   b29bd178f0237       kube-proxy-t2c22                             kube-system
	d8d33a8f36018       b1a8c6f707935       55 seconds ago       Running             kindnet-cni               0                   134949bcd76c3       kindnet-2xhmk                                kube-system
	3a60d9be30d61       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   4a20d1b83a9ae       kube-apiserver-embed-certs-593634            kube-system
	16743d0401e11       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   3e0e7d4cacfb7       kube-controller-manager-embed-certs-593634   kube-system
	d86785ce1ba19       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   742bd542951dc       kube-scheduler-embed-certs-593634            kube-system
	ba70ac31cf979       a1894772a478e       About a minute ago   Running             etcd                      0                   66b61302af36b       etcd-embed-certs-593634                      kube-system
	
	
	==> containerd <==
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.178643565Z" level=info msg="CreateContainer within sandbox \"a4f25514c1964a4bad392ff80b25d804ec1e02345ceccbf1862a6d0a1fd8dfd7\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"422cf5815a208c4f42d49c97bb60a4d5a737aa0fd4371ddaf2bbf0da8af91cea\""
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.181690017Z" level=info msg="StartContainer for \"422cf5815a208c4f42d49c97bb60a4d5a737aa0fd4371ddaf2bbf0da8af91cea\""
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.184621292Z" level=info msg="connecting to shim 422cf5815a208c4f42d49c97bb60a4d5a737aa0fd4371ddaf2bbf0da8af91cea" address="unix:///run/containerd/s/6fd512b12bd919a3d55db088ea8349c84e932c10e41065b5f9aa0777efc07cb8" protocol=ttrpc version=3
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.186746157Z" level=info msg="Container 8f3613b1af9f58510f2488f3930203a9b7f874fe0d8361da5a4c7182aeab5ee3: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.203523107Z" level=info msg="CreateContainer within sandbox \"56eb5cfd1d5479dc5d3b4e73c73fa94c5cf5725e179c449e93ccdf1da24fb69b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f3613b1af9f58510f2488f3930203a9b7f874fe0d8361da5a4c7182aeab5ee3\""
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.204726264Z" level=info msg="StartContainer for \"8f3613b1af9f58510f2488f3930203a9b7f874fe0d8361da5a4c7182aeab5ee3\""
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.205629757Z" level=info msg="connecting to shim 8f3613b1af9f58510f2488f3930203a9b7f874fe0d8361da5a4c7182aeab5ee3" address="unix:///run/containerd/s/b7185fa15757b5512445d216ac38aca4a90e2a2db11f69c047d62aeda287db85" protocol=ttrpc version=3
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.268726088Z" level=info msg="StartContainer for \"422cf5815a208c4f42d49c97bb60a4d5a737aa0fd4371ddaf2bbf0da8af91cea\" returns successfully"
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.300301905Z" level=info msg="StartContainer for \"8f3613b1af9f58510f2488f3930203a9b7f874fe0d8361da5a4c7182aeab5ee3\" returns successfully"
	Nov 24 14:03:52 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:52.442191903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f8c75830-451a-4be9-beb5-1131f44fca93,Namespace:default,Attempt:0,}"
	Nov 24 14:03:52 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:52.496274712Z" level=info msg="connecting to shim d2bab1c2ee20342f2bf5b4dd6f7e900c0fc64ebf0d26b010c4b3a6d507b1de6c" address="unix:///run/containerd/s/cae2fa33dbafd3c456ea071f9682925cc0c88b0756f7fd2c1865374e7138124c" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 14:03:52 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:52.616560668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f8c75830-451a-4be9-beb5-1131f44fca93,Namespace:default,Attempt:0,} returns sandbox id \"d2bab1c2ee20342f2bf5b4dd6f7e900c0fc64ebf0d26b010c4b3a6d507b1de6c\""
	Nov 24 14:03:52 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:52.628856409Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.841920297Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.846328406Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937189"
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.846432932Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.849370402Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.850083419Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.220987156s"
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.850130411Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.859615952Z" level=info msg="CreateContainer within sandbox \"d2bab1c2ee20342f2bf5b4dd6f7e900c0fc64ebf0d26b010c4b3a6d507b1de6c\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.875644183Z" level=info msg="Container f82ca073066cf9939535c6279e3af5e38acddcb484054bcce23ad7914656ebc1: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.888566261Z" level=info msg="CreateContainer within sandbox \"d2bab1c2ee20342f2bf5b4dd6f7e900c0fc64ebf0d26b010c4b3a6d507b1de6c\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"f82ca073066cf9939535c6279e3af5e38acddcb484054bcce23ad7914656ebc1\""
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.891618185Z" level=info msg="StartContainer for \"f82ca073066cf9939535c6279e3af5e38acddcb484054bcce23ad7914656ebc1\""
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.892801141Z" level=info msg="connecting to shim f82ca073066cf9939535c6279e3af5e38acddcb484054bcce23ad7914656ebc1" address="unix:///run/containerd/s/cae2fa33dbafd3c456ea071f9682925cc0c88b0756f7fd2c1865374e7138124c" protocol=ttrpc version=3
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.974262276Z" level=info msg="StartContainer for \"f82ca073066cf9939535c6279e3af5e38acddcb484054bcce23ad7914656ebc1\" returns successfully"
	
	
	==> coredns [8f3613b1af9f58510f2488f3930203a9b7f874fe0d8361da5a4c7182aeab5ee3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39972 - 47583 "HINFO IN 121077021184602861.149659788633537211. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.024187178s
	
	
	==> describe nodes <==
	Name:               embed-certs-593634
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-593634
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=embed-certs-593634
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_03_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:02:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-593634
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:04:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:04:02 +0000   Mon, 24 Nov 2025 14:02:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:04:02 +0000   Mon, 24 Nov 2025 14:02:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:04:02 +0000   Mon, 24 Nov 2025 14:02:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:04:02 +0000   Mon, 24 Nov 2025 14:03:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-593634
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                e4ef0f99-1a9a-4cde-9064-423d8b90181c
	  Boot ID:                    dd480c26-e101-4930-b98c-54c06b430fdc
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-jjgxr                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-embed-certs-593634                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         64s
	  kube-system                 kindnet-2xhmk                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-embed-certs-593634             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-embed-certs-593634    200m (10%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-t2c22                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-embed-certs-593634             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Warning  CgroupV1                 73s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node embed-certs-593634 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node embed-certs-593634 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x7 over 73s)  kubelet          Node embed-certs-593634 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node embed-certs-593634 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node embed-certs-593634 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node embed-certs-593634 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                node-controller  Node embed-certs-593634 event: Registered Node embed-certs-593634 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-593634 status is now: NodeReady
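	
	As a cross-check, the Allocated resources totals above follow directly from the per-pod figures in the Non-terminated Pods table:
	
	  CPU requests:    100m + 100m + 100m + 250m + 200m + 100m = 850m   (850m of 2000m ≈ 42%)
	  CPU limits:      100m (kindnet only)                     = 100m   (5%)
	  Memory requests: 70Mi + 100Mi + 50Mi                     = 220Mi  (~2% of 8022296Ki)
	  Memory limits:   170Mi + 50Mi                            = 220Mi  (~2%)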
	
	
	==> dmesg <==
	[Nov24 12:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014697] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497291] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033884] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.804993] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.476130] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [ba70ac31cf979f3847171923cc96cefda27e391c3648e7c5dc513e3347116c24] <==
	{"level":"warn","ts":"2025-11-24T14:02:54.950803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:54.971534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.026006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.083106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.108992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.125922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.145993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.169733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.190603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.231586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.231938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.253261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.292628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.311727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.327796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.347182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.368705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.438447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.455662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.547376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.552263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.568140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.595996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.615225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.737669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38362","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:04:03 up  1:46,  0 user,  load average: 2.57, 3.34, 3.02
	Linux embed-certs-593634 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d8d33a8f36018747afa88137af5d6a8191a723a5d7f8346b8bd229e79e9811be] <==
	I1124 14:03:08.365479       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:03:08.366449       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 14:03:08.366825       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:03:08.366897       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:03:08.366914       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:03:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:03:08.569422       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:03:08.569508       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:03:08.569540       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:03:08.571989       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:03:38.569149       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 14:03:38.571353       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:03:38.572368       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:03:38.572375       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 14:03:40.070308       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:03:40.070421       1 metrics.go:72] Registering metrics
	I1124 14:03:40.070581       1 controller.go:711] "Syncing nftables rules"
	I1124 14:03:48.575778       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:03:48.575981       1 main.go:301] handling current node
	I1124 14:03:58.569104       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:03:58.569141       1 main.go:301] handling current node
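	
	The reflector errors followed by the "Caches are synced" line above are the standard client-go list/watch pattern: the informer's reflector retries failed LIST calls (here, dial timeouts to 10.96.0.1:443) on its own, and the controller only starts handling objects once its local cache has synced. A generic sketch of that pattern follows; it is not kindnet's actual code, and the node informer and 30-second resync period are illustrative choices.
	
	// Generic client-go informer sketch: list/watch nodes, let the reflector
	// retry transient errors, and block until the local cache is synced.
	package main
	
	import (
		"fmt"
		"time"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)
	
	func main() {
		// In-cluster configuration, as a DaemonSet pod would use.
		config, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
	
		// The shared informer factory's reflectors keep retrying list/watch
		// failures, matching the repeated "Failed to watch" lines in the log.
		factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
		nodeInformer := factory.Core().V1().Nodes().Informer()
	
		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)
	
		if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
			panic("timed out waiting for caches to sync")
		}
		fmt.Println("caches are synced; the controller can start processing nodes")
	}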
	
	
	==> kube-apiserver [3a60d9be30d6103a95f401caf2bb929b5c49ebfce9a7b132430f55718822e815] <==
	I1124 14:02:57.073133       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:02:57.073335       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 14:02:57.091800       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 14:02:57.107762       1 controller.go:667] quota admission added evaluator for: namespaces
	E1124 14:02:57.200702       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1124 14:02:57.200770       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1124 14:02:57.411896       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:02:57.773921       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 14:02:57.794161       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 14:02:57.794193       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:02:58.858899       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:02:58.963205       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:02:59.174464       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 14:02:59.205211       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1124 14:02:59.206560       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:02:59.212549       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:02:59.903589       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:03:00.175597       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:03:00.248650       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 14:03:00.422202       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 14:03:05.598021       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:03:05.958977       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 14:03:06.109830       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:03:06.148704       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1124 14:04:02.256587       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:37348: use of closed network connection
	
	
	==> kube-controller-manager [16743d0401e1150054e0ee1e6961814398310e73894c86d0327344c25bf7d7b8] <==
	I1124 14:03:04.943768       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 14:03:04.945223       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:03:04.946513       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 14:03:04.946645       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 14:03:04.947760       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 14:03:04.947887       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 14:03:04.949254       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 14:03:04.949649       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 14:03:04.952748       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 14:03:04.954662       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 14:03:04.957152       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 14:03:04.961745       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 14:03:04.972013       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:03:04.981489       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:03:04.991118       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:03:04.991248       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:03:04.991550       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 14:03:04.993610       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:03:04.993742       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:03:04.993864       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 14:03:04.995277       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:03:04.995391       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 14:03:04.995487       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 14:03:04.997279       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 14:03:49.947871       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [488f43af45940b994171e0ae482dcd33c6d809a0fc0db195d899b287b06a5941] <==
	I1124 14:03:08.502899       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:03:08.619457       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:03:08.720016       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:03:08.720258       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 14:03:08.720406       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:03:08.745047       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:03:08.745108       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:03:08.750446       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:03:08.750957       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:03:08.750980       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:03:08.752867       1 config.go:200] "Starting service config controller"
	I1124 14:03:08.752900       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:03:08.752924       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:03:08.752929       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:03:08.752954       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:03:08.753237       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:03:08.757937       1 config.go:309] "Starting node config controller"
	I1124 14:03:08.758160       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:03:08.758239       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:03:08.853992       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:03:08.854011       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:03:08.854049       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d86785ce1ba191be091bc75c25b6729a402901526d6d2888340f1cd1d00aa1fb] <==
	I1124 14:02:57.423648       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:02:57.423904       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 14:02:57.473856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 14:02:57.475095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 14:02:57.475230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:02:57.475291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 14:02:57.484569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 14:02:57.484898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 14:02:57.485038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 14:02:57.485110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 14:02:57.485159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 14:02:57.485237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 14:02:57.485305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 14:02:57.485364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 14:02:57.485401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 14:02:57.485439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 14:02:57.485479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 14:02:57.485524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 14:02:57.485633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 14:02:57.486875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 14:02:57.486976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 14:02:58.328608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 14:02:58.421849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 14:02:58.450372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1124 14:03:00.622867       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:03:06 embed-certs-593634 kubelet[1458]: I1124 14:03:06.184089    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62324907-3da3-4c2c-887d-798d8375da05-lib-modules\") pod \"kube-proxy-t2c22\" (UID: \"62324907-3da3-4c2c-887d-798d8375da05\") " pod="kube-system/kube-proxy-t2c22"
	Nov 24 14:03:06 embed-certs-593634 kubelet[1458]: I1124 14:03:06.184150    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhtqp\" (UniqueName: \"kubernetes.io/projected/62324907-3da3-4c2c-887d-798d8375da05-kube-api-access-bhtqp\") pod \"kube-proxy-t2c22\" (UID: \"62324907-3da3-4c2c-887d-798d8375da05\") " pod="kube-system/kube-proxy-t2c22"
	Nov 24 14:03:06 embed-certs-593634 kubelet[1458]: I1124 14:03:06.184171    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a716bd95-8847-4c78-a39c-0234825c66fb-xtables-lock\") pod \"kindnet-2xhmk\" (UID: \"a716bd95-8847-4c78-a39c-0234825c66fb\") " pod="kube-system/kindnet-2xhmk"
	Nov 24 14:03:06 embed-certs-593634 kubelet[1458]: I1124 14:03:06.184214    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw657\" (UniqueName: \"kubernetes.io/projected/a716bd95-8847-4c78-a39c-0234825c66fb-kube-api-access-kw657\") pod \"kindnet-2xhmk\" (UID: \"a716bd95-8847-4c78-a39c-0234825c66fb\") " pod="kube-system/kindnet-2xhmk"
	Nov 24 14:03:06 embed-certs-593634 kubelet[1458]: I1124 14:03:06.184234    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62324907-3da3-4c2c-887d-798d8375da05-xtables-lock\") pod \"kube-proxy-t2c22\" (UID: \"62324907-3da3-4c2c-887d-798d8375da05\") " pod="kube-system/kube-proxy-t2c22"
	Nov 24 14:03:06 embed-certs-593634 kubelet[1458]: I1124 14:03:06.184253    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a716bd95-8847-4c78-a39c-0234825c66fb-lib-modules\") pod \"kindnet-2xhmk\" (UID: \"a716bd95-8847-4c78-a39c-0234825c66fb\") " pod="kube-system/kindnet-2xhmk"
	Nov 24 14:03:06 embed-certs-593634 kubelet[1458]: I1124 14:03:06.184273    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/62324907-3da3-4c2c-887d-798d8375da05-kube-proxy\") pod \"kube-proxy-t2c22\" (UID: \"62324907-3da3-4c2c-887d-798d8375da05\") " pod="kube-system/kube-proxy-t2c22"
	Nov 24 14:03:06 embed-certs-593634 kubelet[1458]: I1124 14:03:06.184290    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a716bd95-8847-4c78-a39c-0234825c66fb-cni-cfg\") pod \"kindnet-2xhmk\" (UID: \"a716bd95-8847-4c78-a39c-0234825c66fb\") " pod="kube-system/kindnet-2xhmk"
	Nov 24 14:03:07 embed-certs-593634 kubelet[1458]: E1124 14:03:07.319345    1458 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 24 14:03:07 embed-certs-593634 kubelet[1458]: E1124 14:03:07.319394    1458 projected.go:196] Error preparing data for projected volume kube-api-access-kw657 for pod kube-system/kindnet-2xhmk: failed to sync configmap cache: timed out waiting for the condition
	Nov 24 14:03:07 embed-certs-593634 kubelet[1458]: E1124 14:03:07.319494    1458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a716bd95-8847-4c78-a39c-0234825c66fb-kube-api-access-kw657 podName:a716bd95-8847-4c78-a39c-0234825c66fb nodeName:}" failed. No retries permitted until 2025-11-24 14:03:07.819467027 +0000 UTC m=+7.824199089 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kw657" (UniqueName: "kubernetes.io/projected/a716bd95-8847-4c78-a39c-0234825c66fb-kube-api-access-kw657") pod "kindnet-2xhmk" (UID: "a716bd95-8847-4c78-a39c-0234825c66fb") : failed to sync configmap cache: timed out waiting for the condition
	Nov 24 14:03:07 embed-certs-593634 kubelet[1458]: E1124 14:03:07.335416    1458 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 24 14:03:07 embed-certs-593634 kubelet[1458]: E1124 14:03:07.335468    1458 projected.go:196] Error preparing data for projected volume kube-api-access-bhtqp for pod kube-system/kube-proxy-t2c22: failed to sync configmap cache: timed out waiting for the condition
	Nov 24 14:03:07 embed-certs-593634 kubelet[1458]: E1124 14:03:07.335549    1458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62324907-3da3-4c2c-887d-798d8375da05-kube-api-access-bhtqp podName:62324907-3da3-4c2c-887d-798d8375da05 nodeName:}" failed. No retries permitted until 2025-11-24 14:03:07.835529037 +0000 UTC m=+7.840261115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bhtqp" (UniqueName: "kubernetes.io/projected/62324907-3da3-4c2c-887d-798d8375da05-kube-api-access-bhtqp") pod "kube-proxy-t2c22" (UID: "62324907-3da3-4c2c-887d-798d8375da05") : failed to sync configmap cache: timed out waiting for the condition
	Nov 24 14:03:07 embed-certs-593634 kubelet[1458]: I1124 14:03:07.898918    1458 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 14:03:08 embed-certs-593634 kubelet[1458]: I1124 14:03:08.602912    1458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t2c22" podStartSLOduration=3.602894075 podStartE2EDuration="3.602894075s" podCreationTimestamp="2025-11-24 14:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:03:08.587884518 +0000 UTC m=+8.592616588" watchObservedRunningTime="2025-11-24 14:03:08.602894075 +0000 UTC m=+8.607626137"
	Nov 24 14:03:08 embed-certs-593634 kubelet[1458]: I1124 14:03:08.603557    1458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2xhmk" podStartSLOduration=3.6035446589999998 podStartE2EDuration="3.603544659s" podCreationTimestamp="2025-11-24 14:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:03:08.602680009 +0000 UTC m=+8.607412079" watchObservedRunningTime="2025-11-24 14:03:08.603544659 +0000 UTC m=+8.608276729"
	Nov 24 14:03:48 embed-certs-593634 kubelet[1458]: I1124 14:03:48.652494    1458 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 14:03:48 embed-certs-593634 kubelet[1458]: I1124 14:03:48.781306    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/45b3891f-97a3-4dcb-bafa-b1400a3b4480-tmp\") pod \"storage-provisioner\" (UID: \"45b3891f-97a3-4dcb-bafa-b1400a3b4480\") " pod="kube-system/storage-provisioner"
	Nov 24 14:03:48 embed-certs-593634 kubelet[1458]: I1124 14:03:48.781355    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzlqp\" (UniqueName: \"kubernetes.io/projected/45b3891f-97a3-4dcb-bafa-b1400a3b4480-kube-api-access-fzlqp\") pod \"storage-provisioner\" (UID: \"45b3891f-97a3-4dcb-bafa-b1400a3b4480\") " pod="kube-system/storage-provisioner"
	Nov 24 14:03:48 embed-certs-593634 kubelet[1458]: I1124 14:03:48.882624    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66-config-volume\") pod \"coredns-66bc5c9577-jjgxr\" (UID: \"9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66\") " pod="kube-system/coredns-66bc5c9577-jjgxr"
	Nov 24 14:03:48 embed-certs-593634 kubelet[1458]: I1124 14:03:48.882851    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zcgd\" (UniqueName: \"kubernetes.io/projected/9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66-kube-api-access-4zcgd\") pod \"coredns-66bc5c9577-jjgxr\" (UID: \"9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66\") " pod="kube-system/coredns-66bc5c9577-jjgxr"
	Nov 24 14:03:49 embed-certs-593634 kubelet[1458]: I1124 14:03:49.732052    1458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jjgxr" podStartSLOduration=43.732031029 podStartE2EDuration="43.732031029s" podCreationTimestamp="2025-11-24 14:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:03:49.702473139 +0000 UTC m=+49.707205217" watchObservedRunningTime="2025-11-24 14:03:49.732031029 +0000 UTC m=+49.736763091"
	Nov 24 14:03:49 embed-certs-593634 kubelet[1458]: I1124 14:03:49.761125    1458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.761104572 podStartE2EDuration="42.761104572s" podCreationTimestamp="2025-11-24 14:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:03:49.733124112 +0000 UTC m=+49.737856182" watchObservedRunningTime="2025-11-24 14:03:49.761104572 +0000 UTC m=+49.765836643"
	Nov 24 14:03:52 embed-certs-593634 kubelet[1458]: I1124 14:03:52.315700    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxwvm\" (UniqueName: \"kubernetes.io/projected/f8c75830-451a-4be9-beb5-1131f44fca93-kube-api-access-fxwvm\") pod \"busybox\" (UID: \"f8c75830-451a-4be9-beb5-1131f44fca93\") " pod="default/busybox"
	
	
	==> storage-provisioner [422cf5815a208c4f42d49c97bb60a4d5a737aa0fd4371ddaf2bbf0da8af91cea] <==
	I1124 14:03:49.278304       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:03:49.296218       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:03:49.296395       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:03:49.311480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:49.321306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:03:49.321631       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:03:49.324019       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-593634_393582fa-c8fe-4cfe-bbf4-56facd09b640!
	I1124 14:03:49.324344       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"30a9fdad-3f98-4373-8429-132f81eb40fd", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-593634_393582fa-c8fe-4cfe-bbf4-56facd09b640 became leader
	W1124 14:03:49.339974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:49.366739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:03:49.424489       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-593634_393582fa-c8fe-4cfe-bbf4-56facd09b640!
	W1124 14:03:51.371354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:51.377432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:53.380386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:53.385451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:55.389054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:55.394702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:57.398135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:57.403152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:59.406676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:59.411593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:04:01.415364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:04:01.420521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:04:03.430407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:04:03.439832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-593634 -n embed-certs-593634
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-593634 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-593634
helpers_test.go:243: (dbg) docker inspect embed-certs-593634:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6e5dee24e5b06ef4edf06922c23343665480a4e085114101cb004988b20b9fef",
	        "Created": "2025-11-24T14:02:31.673833431Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 214780,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:02:31.753778558Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/6e5dee24e5b06ef4edf06922c23343665480a4e085114101cb004988b20b9fef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6e5dee24e5b06ef4edf06922c23343665480a4e085114101cb004988b20b9fef/hostname",
	        "HostsPath": "/var/lib/docker/containers/6e5dee24e5b06ef4edf06922c23343665480a4e085114101cb004988b20b9fef/hosts",
	        "LogPath": "/var/lib/docker/containers/6e5dee24e5b06ef4edf06922c23343665480a4e085114101cb004988b20b9fef/6e5dee24e5b06ef4edf06922c23343665480a4e085114101cb004988b20b9fef-json.log",
	        "Name": "/embed-certs-593634",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-593634:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-593634",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6e5dee24e5b06ef4edf06922c23343665480a4e085114101cb004988b20b9fef",
	                "LowerDir": "/var/lib/docker/overlay2/bfe836a413401a99276c285b2ed8bd202617ff61d94db99f7c8efa134ddc9592-init/diff:/var/lib/docker/overlay2/f206897dad0d7c6b66379aa7c75402ab98ba158a4fc5aedf84eda3d57da10430/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bfe836a413401a99276c285b2ed8bd202617ff61d94db99f7c8efa134ddc9592/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bfe836a413401a99276c285b2ed8bd202617ff61d94db99f7c8efa134ddc9592/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bfe836a413401a99276c285b2ed8bd202617ff61d94db99f7c8efa134ddc9592/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-593634",
	                "Source": "/var/lib/docker/volumes/embed-certs-593634/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-593634",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-593634",
	                "name.minikube.sigs.k8s.io": "embed-certs-593634",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6bd72f29cb118f69e9142e2d1382fba48b6f55fa7d86bdbdd835204321e3acca",
	            "SandboxKey": "/var/run/docker/netns/6bd72f29cb11",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-593634": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:4e:81:77:c4:1f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "26b8c3d8c63cdc00f4fee3f97bf6b2a945c3da49721adc903f246a874d6a2dc0",
	                    "EndpointID": "1cc13b824ec609250d80920f5396576e884feee9a58c7bf52b4aaae6c9212945",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-593634",
	                        "6e5dee24e5b0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-593634 -n embed-certs-593634
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-593634 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-593634 logs -n 25: (1.198480057s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p force-systemd-env-134839 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-134839     │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p kubernetes-upgrade-758885                                                                                                                                                                                                                        │ kubernetes-upgrade-758885    │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p cert-expiration-865605 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-865605       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ force-systemd-env-134839 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-134839     │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p force-systemd-env-134839                                                                                                                                                                                                                         │ force-systemd-env-134839     │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p cert-options-440754 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-440754          │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ cert-options-440754 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-440754          │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ ssh     │ -p cert-options-440754 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-440754          │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p cert-options-440754                                                                                                                                                                                                                              │ cert-options-440754          │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p old-k8s-version-318786 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 14:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-318786 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ stop    │ -p old-k8s-version-318786 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-318786 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ start   │ -p old-k8s-version-318786 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ image   │ old-k8s-version-318786 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ pause   │ -p old-k8s-version-318786 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ start   │ -p cert-expiration-865605 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-865605       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ unpause │ -p old-k8s-version-318786 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ delete  │ -p old-k8s-version-318786                                                                                                                                                                                                                           │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ delete  │ -p old-k8s-version-318786                                                                                                                                                                                                                           │ old-k8s-version-318786       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ start   │ -p default-k8s-diff-port-609438 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:03 UTC │
	│ delete  │ -p cert-expiration-865605                                                                                                                                                                                                                           │ cert-expiration-865605       │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:02 UTC │
	│ start   │ -p embed-certs-593634 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:02 UTC │ 24 Nov 25 14:03 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-609438 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:03 UTC │ 24 Nov 25 14:03 UTC │
	│ stop    │ -p default-k8s-diff-port-609438 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:03 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:02:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:02:25.355768  213570 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:02:25.355897  213570 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:02:25.355929  213570 out.go:374] Setting ErrFile to fd 2...
	I1124 14:02:25.355935  213570 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:02:25.356214  213570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 14:02:25.356610  213570 out.go:368] Setting JSON to false
	I1124 14:02:25.357458  213570 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6294,"bootTime":1763986651,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 14:02:25.357531  213570 start.go:143] virtualization:  
	I1124 14:02:25.363130  213570 out.go:179] * [embed-certs-593634] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:02:25.366080  213570 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:02:25.366317  213570 notify.go:221] Checking for updates...
	I1124 14:02:25.371678  213570 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:02:25.374517  213570 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:02:25.377392  213570 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	I1124 14:02:25.380291  213570 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:02:25.383233  213570 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:02:25.386803  213570 config.go:182] Loaded profile config "default-k8s-diff-port-609438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:02:25.386988  213570 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:02:25.428466  213570 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:02:25.428628  213570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:02:25.551573  213570 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:55 SystemTime:2025-11-24 14:02:25.537516273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:02:25.551683  213570 docker.go:319] overlay module found
	I1124 14:02:25.556682  213570 out.go:179] * Using the docker driver based on user configuration
	I1124 14:02:25.559709  213570 start.go:309] selected driver: docker
	I1124 14:02:25.559726  213570 start.go:927] validating driver "docker" against <nil>
	I1124 14:02:25.559738  213570 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:02:25.560805  213570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:02:25.668193  213570 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-24 14:02:25.655788801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:02:25.668344  213570 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:02:25.668552  213570 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:02:25.671717  213570 out.go:179] * Using Docker driver with root privileges
	I1124 14:02:25.674536  213570 cni.go:84] Creating CNI manager for ""
	I1124 14:02:25.674610  213570 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:02:25.674621  213570 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:02:25.674693  213570 start.go:353] cluster config:
	{Name:embed-certs-593634 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:02:25.677759  213570 out.go:179] * Starting "embed-certs-593634" primary control-plane node in "embed-certs-593634" cluster
	I1124 14:02:25.680596  213570 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 14:02:25.683549  213570 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:02:25.686518  213570 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:02:25.686579  213570 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1124 14:02:25.686594  213570 cache.go:65] Caching tarball of preloaded images
	I1124 14:02:25.686679  213570 preload.go:238] Found /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1124 14:02:25.686689  213570 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 14:02:25.686792  213570 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/config.json ...
	I1124 14:02:25.686808  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/config.json: {Name:mkcf0b417a9473ceb4b66956bfa520a43f4ebbeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:25.686945  213570 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:02:25.710900  213570 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:02:25.710919  213570 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:02:25.710933  213570 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:02:25.710962  213570 start.go:360] acquireMachinesLock for embed-certs-593634: {Name:mk435fa1f228450b1765e3435053e751c40a1834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:02:25.711053  213570 start.go:364] duration metric: took 77.449µs to acquireMachinesLock for "embed-certs-593634"
	I1124 14:02:25.711077  213570 start.go:93] Provisioning new machine with config: &{Name:embed-certs-593634 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 14:02:25.711153  213570 start.go:125] createHost starting for "" (driver="docker")
	I1124 14:02:23.909747  212383 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-609438 --name default-k8s-diff-port-609438 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-609438 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-609438 --network default-k8s-diff-port-609438 --ip 192.168.85.2 --volume default-k8s-diff-port-609438:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:02:24.307279  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Running}}
	I1124 14:02:24.327311  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:24.369313  212383 cli_runner.go:164] Run: docker exec default-k8s-diff-port-609438 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:02:24.459655  212383 oci.go:144] the created container "default-k8s-diff-port-609438" has a running status.
	I1124 14:02:24.459682  212383 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa...
	I1124 14:02:24.627125  212383 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:02:24.888609  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:24.933748  212383 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:02:24.933772  212383 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-609438 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 14:02:25.043026  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:25.089321  212383 machine.go:94] provisionDockerMachine start ...
	I1124 14:02:25.089431  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:25.153799  212383 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:25.154239  212383 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 14:02:25.154258  212383 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:02:25.461029  212383 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-609438
	
	I1124 14:02:25.461072  212383 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-609438"
	I1124 14:02:25.461152  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:25.543103  212383 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:25.543625  212383 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 14:02:25.543643  212383 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-609438 && echo "default-k8s-diff-port-609438" | sudo tee /etc/hostname
	I1124 14:02:25.773225  212383 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-609438
	
	I1124 14:02:25.773297  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:25.800013  212383 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:25.801080  212383 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1124 14:02:25.801108  212383 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-609438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-609438/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-609438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:02:26.006217  212383 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:02:26.006244  212383 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2368/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2368/.minikube}
	I1124 14:02:26.006263  212383 ubuntu.go:190] setting up certificates
	I1124 14:02:26.006272  212383 provision.go:84] configureAuth start
	I1124 14:02:26.006350  212383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-609438
	I1124 14:02:26.026909  212383 provision.go:143] copyHostCerts
	I1124 14:02:26.026970  212383 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem, removing ...
	I1124 14:02:26.026980  212383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem
	I1124 14:02:26.027046  212383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem (1082 bytes)
	I1124 14:02:26.027134  212383 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem, removing ...
	I1124 14:02:26.027140  212383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem
	I1124 14:02:26.027166  212383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem (1123 bytes)
	I1124 14:02:26.027243  212383 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem, removing ...
	I1124 14:02:26.027248  212383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem
	I1124 14:02:26.027271  212383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem (1679 bytes)
	I1124 14:02:26.027316  212383 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-609438 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-609438 localhost minikube]
	I1124 14:02:26.479334  212383 provision.go:177] copyRemoteCerts
	I1124 14:02:26.479453  212383 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:02:26.479529  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:26.509970  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:26.633721  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:02:26.665930  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 14:02:26.697677  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 14:02:26.732905  212383 provision.go:87] duration metric: took 726.609261ms to configureAuth
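	The configureAuth step above copies the local CA material and then issues a server certificate for the node with both IP and DNS SANs (127.0.0.1, 192.168.85.2, the profile name, localhost, minikube), pushing the result to /etc/docker on the machine. As a rough, self-contained Go sketch of issuing such a SAN certificate with the standard library (a stand-in CA is generated inline here; minikube actually reuses ca.pem/ca-key.pem from the .minikube/certs directory, and the key size and lifetime below are placeholders, not minikube's code):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Stand-in CA; in the log this comes from ca.pem / ca-key.pem.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)

		// Server certificate with the IP and DNS SANs seen in the log above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-609438"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			DNSNames:     []string{"default-k8s-diff-port-609438", "localhost", "minikube"},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		check(err)
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}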
	I1124 14:02:26.732938  212383 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:02:26.733137  212383 config.go:182] Loaded profile config "default-k8s-diff-port-609438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:02:26.733153  212383 machine.go:97] duration metric: took 1.643811371s to provisionDockerMachine
	I1124 14:02:26.733161  212383 client.go:176] duration metric: took 7.487822203s to LocalClient.Create
	I1124 14:02:26.733175  212383 start.go:167] duration metric: took 7.487885367s to libmachine.API.Create "default-k8s-diff-port-609438"
	I1124 14:02:26.733189  212383 start.go:293] postStartSetup for "default-k8s-diff-port-609438" (driver="docker")
	I1124 14:02:26.733198  212383 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:02:26.733271  212383 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:02:26.733323  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:26.763570  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:26.897119  212383 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:02:26.901182  212383 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:02:26.901211  212383 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:02:26.901223  212383 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/addons for local assets ...
	I1124 14:02:26.901281  212383 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/files for local assets ...
	I1124 14:02:26.901360  212383 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem -> 41782.pem in /etc/ssl/certs
	I1124 14:02:26.901463  212383 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:02:26.909763  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:02:26.930128  212383 start.go:296] duration metric: took 196.924439ms for postStartSetup
	I1124 14:02:26.930508  212383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-609438
	I1124 14:02:26.950744  212383 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/config.json ...
	I1124 14:02:26.951035  212383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:02:26.951091  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:26.973535  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:27.077778  212383 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:02:27.083066  212383 start.go:128] duration metric: took 7.841363739s to createHost
	I1124 14:02:27.083089  212383 start.go:83] releasing machines lock for "default-k8s-diff-port-609438", held for 7.84148292s
	I1124 14:02:27.083163  212383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-609438
	I1124 14:02:27.105539  212383 ssh_runner.go:195] Run: cat /version.json
	I1124 14:02:27.105585  212383 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:02:27.105661  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:27.105589  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:27.149461  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:27.157732  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:27.367320  212383 ssh_runner.go:195] Run: systemctl --version
	I1124 14:02:27.374447  212383 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:02:27.380473  212383 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:02:27.380647  212383 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:02:27.413935  212383 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 14:02:27.414007  212383 start.go:496] detecting cgroup driver to use...
	I1124 14:02:27.414056  212383 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:02:27.414133  212383 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 14:02:27.430159  212383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 14:02:27.444285  212383 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:02:27.444392  212383 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:02:27.461944  212383 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:02:27.481645  212383 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:02:27.639351  212383 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:02:27.799286  212383 docker.go:234] disabling docker service ...
	I1124 14:02:27.799350  212383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:02:27.831375  212383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:02:27.845484  212383 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:02:27.983498  212383 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:02:28.133537  212383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:02:28.150716  212383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:02:28.166057  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 14:02:28.175128  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 14:02:28.184145  212383 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 14:02:28.184265  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 14:02:28.192987  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:02:28.202626  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 14:02:28.211553  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:02:28.220020  212383 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:02:28.228018  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 14:02:28.236891  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 14:02:28.245507  212383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 14:02:28.254226  212383 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:02:28.262068  212383 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:02:28.269803  212383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:28.442896  212383 ssh_runner.go:195] Run: sudo systemctl restart containerd
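	The run of sed commands above rewrites /etc/containerd/config.toml in place (sandbox image, restrict_oom_score_adj, SystemdCgroup=false to match the cgroupfs driver detected on the host, runc runtime type, CNI conf_dir, unprivileged ports) before containerd is restarted. A minimal sketch of the SystemdCgroup edit expressed in Go rather than sed, with the file path assumed for illustration (not minikube's actual code):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		path := "/etc/containerd/config.toml" // assumed path, as in the log
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}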
	I1124 14:02:28.596361  212383 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 14:02:28.596444  212383 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 14:02:28.602936  212383 start.go:564] Will wait 60s for crictl version
	I1124 14:02:28.603014  212383 ssh_runner.go:195] Run: which crictl
	I1124 14:02:28.607012  212383 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:02:28.645174  212383 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 14:02:28.645247  212383 ssh_runner.go:195] Run: containerd --version
	I1124 14:02:28.669934  212383 ssh_runner.go:195] Run: containerd --version
	I1124 14:02:28.700929  212383 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 14:02:28.704729  212383 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-609438 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:02:28.734893  212383 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 14:02:28.738862  212383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:02:28.749508  212383 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-609438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-609438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:02:28.749613  212383 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:02:28.749681  212383 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:02:28.782633  212383 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:02:28.782654  212383 containerd.go:534] Images already preloaded, skipping extraction
	I1124 14:02:28.782711  212383 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:02:28.839126  212383 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:02:28.839147  212383 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:02:28.839155  212383 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1124 14:02:28.839244  212383 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-609438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-609438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:02:28.839314  212383 ssh_runner.go:195] Run: sudo crictl info
	I1124 14:02:28.874904  212383 cni.go:84] Creating CNI manager for ""
	I1124 14:02:28.874924  212383 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:02:28.874940  212383 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:02:28.874963  212383 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-609438 NodeName:default-k8s-diff-port-609438 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:02:28.875085  212383 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-609438"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:02:28.875154  212383 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:02:28.884597  212383 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:02:28.884669  212383 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:02:25.714459  213570 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:02:25.714725  213570 start.go:159] libmachine.API.Create for "embed-certs-593634" (driver="docker")
	I1124 14:02:25.714819  213570 client.go:173] LocalClient.Create starting
	I1124 14:02:25.714954  213570 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem
	I1124 14:02:25.715008  213570 main.go:143] libmachine: Decoding PEM data...
	I1124 14:02:25.715051  213570 main.go:143] libmachine: Parsing certificate...
	I1124 14:02:25.715148  213570 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem
	I1124 14:02:25.715206  213570 main.go:143] libmachine: Decoding PEM data...
	I1124 14:02:25.715261  213570 main.go:143] libmachine: Parsing certificate...
	I1124 14:02:25.715745  213570 cli_runner.go:164] Run: docker network inspect embed-certs-593634 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:02:25.736780  213570 cli_runner.go:211] docker network inspect embed-certs-593634 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:02:25.736871  213570 network_create.go:284] running [docker network inspect embed-certs-593634] to gather additional debugging logs...
	I1124 14:02:25.736888  213570 cli_runner.go:164] Run: docker network inspect embed-certs-593634
	W1124 14:02:25.769114  213570 cli_runner.go:211] docker network inspect embed-certs-593634 returned with exit code 1
	I1124 14:02:25.769141  213570 network_create.go:287] error running [docker network inspect embed-certs-593634]: docker network inspect embed-certs-593634: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-593634 not found
	I1124 14:02:25.769154  213570 network_create.go:289] output of [docker network inspect embed-certs-593634]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-593634 not found
	
	** /stderr **
	I1124 14:02:25.769257  213570 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:02:25.800766  213570 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e5e15b13860d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:3d:37:c4:cc:77} reservation:<nil>}
	I1124 14:02:25.801103  213570 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-66593a990bce IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:c0:9b:bc:41:ca} reservation:<nil>}
	I1124 14:02:25.801995  213570 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-37e9fb0954cf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:0b:6f:6e:b2:8c} reservation:<nil>}
	I1124 14:02:25.802424  213570 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e9170}
	I1124 14:02:25.802442  213570 network_create.go:124] attempt to create docker network embed-certs-593634 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 14:02:25.802493  213570 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-593634 embed-certs-593634
	I1124 14:02:25.881093  213570 network_create.go:108] docker network embed-certs-593634 192.168.76.0/24 created
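	The network_create lines above show how a subnet is picked for the new docker network: bridge subnets already in use are skipped and the first free private /24 is taken (here 192.168.76.0/24, after 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 were rejected). A toy Go sketch of that search, stepping the third octet by 9 as the sequence in the log suggests (the candidate list and step size are inferred from this log, not taken from minikube's source):

	package main

	import "fmt"

	func main() {
		// Subnets the log reports as taken by existing docker bridges.
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
		}
		for third := 49; third <= 255; third += 9 {
			candidate := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[candidate] {
				fmt.Println("using free private subnet", candidate) // -> 192.168.76.0/24
				break
			}
		}
	}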
	I1124 14:02:25.881122  213570 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-593634" container
	I1124 14:02:25.881203  213570 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:02:25.903081  213570 cli_runner.go:164] Run: docker volume create embed-certs-593634 --label name.minikube.sigs.k8s.io=embed-certs-593634 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:02:25.931462  213570 oci.go:103] Successfully created a docker volume embed-certs-593634
	I1124 14:02:25.931542  213570 cli_runner.go:164] Run: docker run --rm --name embed-certs-593634-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-593634 --entrypoint /usr/bin/test -v embed-certs-593634:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:02:26.581166  213570 oci.go:107] Successfully prepared a docker volume embed-certs-593634
	I1124 14:02:26.581232  213570 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:02:26.581244  213570 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:02:26.581311  213570 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-593634:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 14:02:28.894421  212383 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1124 14:02:28.909480  212383 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:02:28.924519  212383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
	I1124 14:02:28.939585  212383 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:02:28.943813  212383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:02:28.954534  212383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:29.104027  212383 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:02:29.125453  212383 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438 for IP: 192.168.85.2
	I1124 14:02:29.125476  212383 certs.go:195] generating shared ca certs ...
	I1124 14:02:29.125503  212383 certs.go:227] acquiring lock for ca certs: {Name:mkcd8707c782acde0e57168c044a3df942dc4ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.125641  212383 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key
	I1124 14:02:29.125695  212383 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key
	I1124 14:02:29.125707  212383 certs.go:257] generating profile certs ...
	I1124 14:02:29.125768  212383 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.key
	I1124 14:02:29.125789  212383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.crt with IP's: []
	I1124 14:02:29.324459  212383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.crt ...
	I1124 14:02:29.324491  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.crt: {Name:mk8aada29dd487d5091685276369440b7d624321 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.324640  212383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.key ...
	I1124 14:02:29.324656  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.key: {Name:mka039edce6f440d55864b8259b2b6e6a4166f35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.324742  212383 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key.0b070d75
	I1124 14:02:29.324762  212383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt.0b070d75 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 14:02:29.388053  212383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt.0b070d75 ...
	I1124 14:02:29.388089  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt.0b070d75: {Name:mk8c33f3dd28832381eccdbc39352bbcf3fad513 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.388234  212383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key.0b070d75 ...
	I1124 14:02:29.388250  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key.0b070d75: {Name:mk1a2d7229ced6b28d71658195699ecc4e6d6cbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.388323  212383 certs.go:382] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt.0b070d75 -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt
	I1124 14:02:29.388407  212383 certs.go:386] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key.0b070d75 -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key
	I1124 14:02:29.388467  212383 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.key
	I1124 14:02:29.388494  212383 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.crt with IP's: []
	I1124 14:02:29.607942  212383 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.crt ...
	I1124 14:02:29.607978  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.crt: {Name:mkf0227a8560a7238360c53d12e60293f9779f1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.608133  212383 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.key ...
	I1124 14:02:29.608148  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.key: {Name:mkdb69944b7ff660a91a53e6ae6208e817233479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:29.608326  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem (1338 bytes)
	W1124 14:02:29.608368  212383 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178_empty.pem, impossibly tiny 0 bytes
	I1124 14:02:29.608383  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:02:29.608412  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem (1082 bytes)
	I1124 14:02:29.608442  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:02:29.608468  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem (1679 bytes)
	I1124 14:02:29.608515  212383 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:02:29.609076  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:02:29.626013  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 14:02:29.643798  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:02:29.661375  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:02:29.679743  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 14:02:29.696528  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:02:29.728013  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:02:29.773516  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:02:29.805187  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem --> /usr/share/ca-certificates/4178.pem (1338 bytes)
	I1124 14:02:29.826865  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /usr/share/ca-certificates/41782.pem (1708 bytes)
	I1124 14:02:29.847529  212383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:02:29.867886  212383 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:02:29.882919  212383 ssh_runner.go:195] Run: openssl version
	I1124 14:02:29.889477  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41782.pem && ln -fs /usr/share/ca-certificates/41782.pem /etc/ssl/certs/41782.pem"
	I1124 14:02:29.898302  212383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41782.pem
	I1124 14:02:29.904667  212383 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/41782.pem
	I1124 14:02:29.904736  212383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41782.pem
	I1124 14:02:29.948420  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:02:29.957558  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:02:29.966733  212383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:29.970899  212383 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:29.970989  212383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:30.019996  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:02:30.030890  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4178.pem && ln -fs /usr/share/ca-certificates/4178.pem /etc/ssl/certs/4178.pem"
	I1124 14:02:30.057890  212383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4178.pem
	I1124 14:02:30.080661  212383 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4178.pem
	I1124 14:02:30.080813  212383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4178.pem
	I1124 14:02:30.155115  212383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4178.pem /etc/ssl/certs/51391683.0"
	I1124 14:02:30.165475  212383 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:02:30.170978  212383 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:02:30.171035  212383 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-609438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-609438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:02:30.171124  212383 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 14:02:30.171192  212383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:02:30.211462  212383 cri.go:89] found id: ""
	I1124 14:02:30.211552  212383 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:02:30.226907  212383 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:02:30.236649  212383 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:02:30.236720  212383 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:02:30.248370  212383 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:02:30.248462  212383 kubeadm.go:158] found existing configuration files:
	
	I1124 14:02:30.248548  212383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1124 14:02:30.262084  212383 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:02:30.262152  212383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:02:30.270330  212383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1124 14:02:30.279476  212383 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:02:30.279543  212383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:02:30.288703  212383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1124 14:02:30.297950  212383 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:02:30.298023  212383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:02:30.310718  212383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1124 14:02:30.320531  212383 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:02:30.320603  212383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 14:02:30.329639  212383 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:02:30.406424  212383 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 14:02:30.406661  212383 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:02:30.479025  212383 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 14:02:31.562417  213570 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-593634:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.981062358s)
	I1124 14:02:31.562447  213570 kic.go:203] duration metric: took 4.981201018s to extract preloaded images to volume ...
	W1124 14:02:31.562585  213570 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 14:02:31.562696  213570 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 14:02:31.653956  213570 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-593634 --name embed-certs-593634 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-593634 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-593634 --network embed-certs-593634 --ip 192.168.76.2 --volume embed-certs-593634:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:02:32.104099  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Running}}
	I1124 14:02:32.133617  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:02:32.170125  213570 cli_runner.go:164] Run: docker exec embed-certs-593634 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:02:32.243591  213570 oci.go:144] the created container "embed-certs-593634" has a running status.
	I1124 14:02:32.243619  213570 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa...
	I1124 14:02:33.008353  213570 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:02:33.030437  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:02:33.051118  213570 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:02:33.051142  213570 kic_runner.go:114] Args: [docker exec --privileged embed-certs-593634 chown docker:docker /home/docker/.ssh/authorized_keys]
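	The kic steps above generate a fresh RSA key pair for the machine, write it under .minikube/machines/embed-certs-593634/, and install the public half as /home/docker/.ssh/authorized_keys inside the container before SSH provisioning begins. A rough Go sketch of producing such a key pair and its authorized_keys entry (assumes golang.org/x/crypto/ssh; file names, key size and permissions here are illustrative, not minikube's exact implementation):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Private key in PEM form, as written to id_rsa (mode 0600).
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
			panic(err)
		}
		// Public key in authorized_keys format, as copied into the container.
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
			panic(err)
		}
	}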
	I1124 14:02:33.146272  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:02:33.172981  213570 machine.go:94] provisionDockerMachine start ...
	I1124 14:02:33.173175  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:33.203273  213570 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:33.203611  213570 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 14:02:33.203620  213570 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:02:33.204370  213570 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 14:02:36.376430  213570 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-593634
	
	I1124 14:02:36.376458  213570 ubuntu.go:182] provisioning hostname "embed-certs-593634"
	I1124 14:02:36.376538  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:36.401139  213570 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:36.401453  213570 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 14:02:36.401469  213570 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-593634 && echo "embed-certs-593634" | sudo tee /etc/hostname
	I1124 14:02:36.589650  213570 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-593634
	
	I1124 14:02:36.589799  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:36.618006  213570 main.go:143] libmachine: Using SSH client type: native
	I1124 14:02:36.618310  213570 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1124 14:02:36.618326  213570 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-593634' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-593634/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-593634' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:02:36.779947  213570 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:02:36.780024  213570 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2368/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2368/.minikube}
	I1124 14:02:36.780065  213570 ubuntu.go:190] setting up certificates
	I1124 14:02:36.780107  213570 provision.go:84] configureAuth start
	I1124 14:02:36.780202  213570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-593634
	I1124 14:02:36.805555  213570 provision.go:143] copyHostCerts
	I1124 14:02:36.805621  213570 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem, removing ...
	I1124 14:02:36.805629  213570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem
	I1124 14:02:36.805706  213570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem (1082 bytes)
	I1124 14:02:36.805804  213570 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem, removing ...
	I1124 14:02:36.805809  213570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem
	I1124 14:02:36.805834  213570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem (1123 bytes)
	I1124 14:02:36.805881  213570 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem, removing ...
	I1124 14:02:36.805885  213570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem
	I1124 14:02:36.805907  213570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem (1679 bytes)
	I1124 14:02:36.805955  213570 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem org=jenkins.embed-certs-593634 san=[127.0.0.1 192.168.76.2 embed-certs-593634 localhost minikube]
	I1124 14:02:37.074442  213570 provision.go:177] copyRemoteCerts
	I1124 14:02:37.074519  213570 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:02:37.074565  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.105113  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.228963  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 14:02:37.249359  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 14:02:37.269580  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 14:02:37.289369  213570 provision.go:87] duration metric: took 509.223197ms to configureAuth
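For reference, the configureAuth phase that just finished generated a server certificate whose SANs cover the node name, localhost and the node IPs, and copied it to /etc/docker inside the machine. A small sketch for inspecting those SANs, assuming the host-side server.pem path logged above:

	# print the Subject Alternative Names baked into the generated server cert
	openssl x509 -noout -text \
		-in /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem \
		| grep -A1 'Subject Alternative Name'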
	I1124 14:02:37.289401  213570 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:02:37.289587  213570 config.go:182] Loaded profile config "embed-certs-593634": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:02:37.289602  213570 machine.go:97] duration metric: took 4.11660352s to provisionDockerMachine
	I1124 14:02:37.289609  213570 client.go:176] duration metric: took 11.57476669s to LocalClient.Create
	I1124 14:02:37.289629  213570 start.go:167] duration metric: took 11.574903397s to libmachine.API.Create "embed-certs-593634"
	I1124 14:02:37.289636  213570 start.go:293] postStartSetup for "embed-certs-593634" (driver="docker")
	I1124 14:02:37.289644  213570 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:02:37.289700  213570 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:02:37.289746  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.313497  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.421261  213570 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:02:37.425376  213570 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:02:37.425402  213570 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:02:37.425413  213570 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/addons for local assets ...
	I1124 14:02:37.425467  213570 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/files for local assets ...
	I1124 14:02:37.425546  213570 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem -> 41782.pem in /etc/ssl/certs
	I1124 14:02:37.425648  213570 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:02:37.434170  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:02:37.454297  213570 start.go:296] duration metric: took 164.646825ms for postStartSetup
	I1124 14:02:37.454768  213570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-593634
	I1124 14:02:37.473090  213570 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/config.json ...
	I1124 14:02:37.473375  213570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:02:37.473419  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.492467  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.597996  213570 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:02:37.603374  213570 start.go:128] duration metric: took 11.892207017s to createHost
	I1124 14:02:37.603402  213570 start.go:83] releasing machines lock for "embed-certs-593634", held for 11.892340336s
	I1124 14:02:37.603491  213570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-593634
	I1124 14:02:37.622681  213570 ssh_runner.go:195] Run: cat /version.json
	I1124 14:02:37.622739  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.622988  213570 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:02:37.623049  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:02:37.653121  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.661266  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:02:37.867529  213570 ssh_runner.go:195] Run: systemctl --version
	I1124 14:02:37.880289  213570 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:02:37.885513  213570 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:02:37.885586  213570 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:02:37.919967  213570 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 14:02:37.920041  213570 start.go:496] detecting cgroup driver to use...
	I1124 14:02:37.920090  213570 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:02:37.920196  213570 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 14:02:37.939855  213570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 14:02:37.954765  213570 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:02:37.954832  213570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:02:37.973211  213570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:02:37.993531  213570 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:02:38.152217  213570 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:02:38.315244  213570 docker.go:234] disabling docker service ...
	I1124 14:02:38.315315  213570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:02:38.342606  213570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:02:38.357435  213570 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:02:38.501143  213570 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:02:38.653968  213570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:02:38.670062  213570 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:02:38.691612  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 14:02:38.701736  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 14:02:38.711955  213570 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 14:02:38.712108  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 14:02:38.722429  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:02:38.732416  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 14:02:38.742370  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:02:38.752386  213570 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:02:38.761548  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 14:02:38.771322  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 14:02:38.781079  213570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 14:02:38.790804  213570 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:02:38.799605  213570 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:02:38.808384  213570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:38.957014  213570 ssh_runner.go:195] Run: sudo systemctl restart containerd
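The sed sequence above rewrites /etc/containerd/config.toml for the cgroupfs cgroup driver, the registry.k8s.io/pause:3.10.1 sandbox image and the runc v2 runtime before containerd is restarted. A condensed sketch of the two central edits, assuming a stock containerd 2.x config layout:

	# same edits as logged above, collapsed into one invocation
	sudo sed -i -r \
		-e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
		-e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' \
		/etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd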
	I1124 14:02:39.134468  213570 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 14:02:39.134589  213570 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 14:02:39.138612  213570 start.go:564] Will wait 60s for crictl version
	I1124 14:02:39.138728  213570 ssh_runner.go:195] Run: which crictl
	I1124 14:02:39.142835  213570 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:02:39.183049  213570 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 14:02:39.183127  213570 ssh_runner.go:195] Run: containerd --version
	I1124 14:02:39.209644  213570 ssh_runner.go:195] Run: containerd --version
	I1124 14:02:39.242563  213570 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 14:02:39.245632  213570 cli_runner.go:164] Run: docker network inspect embed-certs-593634 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:02:39.261116  213570 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 14:02:39.265349  213570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:02:39.275060  213570 kubeadm.go:884] updating cluster {Name:embed-certs-593634 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:02:39.275179  213570 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:02:39.275240  213570 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:02:39.309584  213570 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:02:39.309604  213570 containerd.go:534] Images already preloaded, skipping extraction
	I1124 14:02:39.309666  213570 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:02:39.338298  213570 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:02:39.338369  213570 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:02:39.338391  213570 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1124 14:02:39.338540  213570 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-593634 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
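As a note, the kubelet unit text above is installed further down as /lib/systemd/system/kubelet.service together with the drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines below). A small sketch for checking the merged result on the node:

	# show the installed kubelet unit plus its drop-ins, then the effective ExecStart
	systemctl cat kubelet
	systemctl show -p ExecStart kubelet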
	I1124 14:02:39.338638  213570 ssh_runner.go:195] Run: sudo crictl info
	I1124 14:02:39.374509  213570 cni.go:84] Creating CNI manager for ""
	I1124 14:02:39.374529  213570 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:02:39.374546  213570 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:02:39.374567  213570 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-593634 NodeName:embed-certs-593634 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}

	I1124 14:02:39.374695  213570 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-593634"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
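As a note, the generated kubeadm config above is written below to /var/tmp/minikube/kubeadm.yaml and passed to kubeadm init (see the Start: line further down). A minimal sketch for validating such a config without touching node state, assuming the kubeadm binary path used in this run:

	# render what kubeadm would do with this config, without applying anything
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
		--config /var/tmp/minikube/kubeadm.yaml --dry-run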
	I1124 14:02:39.374758  213570 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:02:39.383722  213570 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:02:39.383790  213570 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:02:39.392664  213570 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1124 14:02:39.407366  213570 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:02:39.421539  213570 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1124 14:02:39.435750  213570 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:02:39.439949  213570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:02:39.450067  213570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:39.594389  213570 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:02:39.612637  213570 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634 for IP: 192.168.76.2
	I1124 14:02:39.612654  213570 certs.go:195] generating shared ca certs ...
	I1124 14:02:39.612670  213570 certs.go:227] acquiring lock for ca certs: {Name:mkcd8707c782acde0e57168c044a3df942dc4ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.612812  213570 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key
	I1124 14:02:39.612861  213570 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key
	I1124 14:02:39.612868  213570 certs.go:257] generating profile certs ...
	I1124 14:02:39.612921  213570 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.key
	I1124 14:02:39.612933  213570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.crt with IP's: []
	I1124 14:02:39.743608  213570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.crt ...
	I1124 14:02:39.743688  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.crt: {Name:mkdc127047d7bba99c4ff0de010fa76eaa96351a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.743978  213570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.key ...
	I1124 14:02:39.744016  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/client.key: {Name:mk5b65ad154f9ff1864bd2678d53c0d49d42b626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.744181  213570 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key.20c14e55
	I1124 14:02:39.744223  213570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt.20c14e55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 14:02:39.792416  213570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt.20c14e55 ...
	I1124 14:02:39.792488  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt.20c14e55: {Name:mk898939d3f887dee7ec2cb55d4f9f3c1473f371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.792715  213570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key.20c14e55 ...
	I1124 14:02:39.792751  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key.20c14e55: {Name:mk7634950b7d8fc2f57ae8ad6d2b71e2a24db521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:39.792893  213570 certs.go:382] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt.20c14e55 -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt
	I1124 14:02:39.793035  213570 certs.go:386] copying /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key.20c14e55 -> /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key
	I1124 14:02:39.793197  213570 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.key
	I1124 14:02:39.793218  213570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.crt with IP's: []
	I1124 14:02:40.512550  213570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.crt ...
	I1124 14:02:40.512590  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.crt: {Name:mk7e59e3c705bb60e30918ea8dec355fb87a4cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:40.512783  213570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.key ...
	I1124 14:02:40.512800  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.key: {Name:mk1c28b0bf985e63e205a9d607bdda54b666c8d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:40.512994  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem (1338 bytes)
	W1124 14:02:40.513046  213570 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178_empty.pem, impossibly tiny 0 bytes
	I1124 14:02:40.513055  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:02:40.513084  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem (1082 bytes)
	I1124 14:02:40.513116  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:02:40.513155  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem (1679 bytes)
	I1124 14:02:40.513205  213570 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:02:40.513807  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:02:40.534476  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 14:02:40.554772  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:02:40.573041  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:02:40.592563  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 14:02:40.610272  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:02:40.648106  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:02:40.675421  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/embed-certs-593634/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:02:40.712861  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:02:40.741274  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem --> /usr/share/ca-certificates/4178.pem (1338 bytes)
	I1124 14:02:40.775540  213570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /usr/share/ca-certificates/41782.pem (1708 bytes)
	I1124 14:02:40.810151  213570 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:02:40.834734  213570 ssh_runner.go:195] Run: openssl version
	I1124 14:02:40.841134  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4178.pem && ln -fs /usr/share/ca-certificates/4178.pem /etc/ssl/certs/4178.pem"
	I1124 14:02:40.853029  213570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4178.pem
	I1124 14:02:40.860558  213570 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4178.pem
	I1124 14:02:40.860626  213570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4178.pem
	I1124 14:02:40.918401  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4178.pem /etc/ssl/certs/51391683.0"
	I1124 14:02:40.928700  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41782.pem && ln -fs /usr/share/ca-certificates/41782.pem /etc/ssl/certs/41782.pem"
	I1124 14:02:40.943881  213570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41782.pem
	I1124 14:02:40.948767  213570 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/41782.pem
	I1124 14:02:40.948833  213570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41782.pem
	I1124 14:02:41.014703  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:02:41.026160  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:02:41.039512  213570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:41.046666  213570 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:41.046734  213570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:02:41.111180  213570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
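The openssl/ln sequence above installs each CA PEM under /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how TLS clients on the node locate trusted CAs. A minimal sketch of the same pattern for one certificate, assuming the path logged above:

	# link a CA cert into /etc/ssl/certs under its subject hash
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"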
	I1124 14:02:41.121762  213570 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:02:41.128022  213570 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:02:41.128075  213570 kubeadm.go:401] StartCluster: {Name:embed-certs-593634 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-593634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:02:41.128164  213570 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 14:02:41.128228  213570 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:02:41.181954  213570 cri.go:89] found id: ""
	I1124 14:02:41.182043  213570 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:02:41.192535  213570 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:02:41.201483  213570 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:02:41.201548  213570 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:02:41.210919  213570 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:02:41.210940  213570 kubeadm.go:158] found existing configuration files:
	
	I1124 14:02:41.210999  213570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 14:02:41.223268  213570 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:02:41.223332  213570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:02:41.239377  213570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 14:02:41.251095  213570 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:02:41.251165  213570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:02:41.259252  213570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 14:02:41.268559  213570 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:02:41.268620  213570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:02:41.282438  213570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 14:02:41.293894  213570 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:02:41.293975  213570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 14:02:41.321578  213570 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:02:41.440101  213570 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:02:41.445250  213570 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:02:41.492866  213570 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:02:41.499280  213570 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 14:02:41.499334  213570 kubeadm.go:319] OS: Linux
	I1124 14:02:41.499382  213570 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:02:41.499444  213570 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 14:02:41.499504  213570 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:02:41.499557  213570 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:02:41.499612  213570 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:02:41.499666  213570 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:02:41.499716  213570 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:02:41.499769  213570 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:02:41.499820  213570 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 14:02:41.625341  213570 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:02:41.625456  213570 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:02:41.625558  213570 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:02:41.636268  213570 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 14:02:41.641768  213570 out.go:252]   - Generating certificates and keys ...
	I1124 14:02:41.641865  213570 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:02:41.641939  213570 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:02:42.619223  213570 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:02:43.011953  213570 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 14:02:43.483393  213570 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:02:43.810126  213570 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:02:44.825951  213570 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:02:44.828294  213570 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-593634 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 14:02:45.647118  213570 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:02:45.647643  213570 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-593634 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 14:02:45.905141  213570 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:02:46.000202  213570 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:02:46.120215  213570 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:02:46.120734  213570 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:02:46.900838  213570 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:02:47.805102  213570 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 14:02:48.517833  213570 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:02:49.348256  213570 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:02:49.516941  213570 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:02:49.518037  213570 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:02:49.520983  213570 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:02:49.523689  213570 out.go:252]   - Booting up control plane ...
	I1124 14:02:49.523845  213570 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:02:49.523973  213570 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:02:49.525837  213570 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:02:49.554261  213570 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:02:49.554370  213570 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:02:49.565946  213570 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:02:49.567436  213570 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:02:49.571311  213570 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:02:49.806053  213570 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:02:49.806172  213570 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 14:02:52.457159  212383 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:02:52.457215  212383 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:02:52.457303  212383 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:02:52.457359  212383 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 14:02:52.457393  212383 kubeadm.go:319] OS: Linux
	I1124 14:02:52.457438  212383 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:02:52.457486  212383 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 14:02:52.457532  212383 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:02:52.457580  212383 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:02:52.457628  212383 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:02:52.457682  212383 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:02:52.457728  212383 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:02:52.457775  212383 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:02:52.457821  212383 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 14:02:52.457893  212383 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:02:52.457987  212383 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:02:52.458077  212383 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:02:52.458138  212383 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 14:02:52.461386  212383 out.go:252]   - Generating certificates and keys ...
	I1124 14:02:52.461491  212383 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:02:52.461556  212383 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:02:52.461623  212383 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:02:52.461680  212383 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 14:02:52.461741  212383 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:02:52.461791  212383 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:02:52.461845  212383 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:02:52.461977  212383 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-609438 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:02:52.462028  212383 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:02:52.462157  212383 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-609438 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 14:02:52.462223  212383 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:02:52.462287  212383 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:02:52.462339  212383 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:02:52.462402  212383 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:02:52.462458  212383 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:02:52.462521  212383 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 14:02:52.462611  212383 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:02:52.462674  212383 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:02:52.462729  212383 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:02:52.462820  212383 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:02:52.462893  212383 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:02:52.465845  212383 out.go:252]   - Booting up control plane ...
	I1124 14:02:52.466035  212383 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:02:52.466163  212383 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:02:52.466242  212383 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:02:52.466364  212383 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:02:52.466465  212383 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:02:52.466577  212383 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:02:52.466668  212383 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:02:52.466709  212383 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:02:52.466848  212383 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:02:52.466960  212383 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 14:02:52.467024  212383 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.018392479s
	I1124 14:02:52.467123  212383 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:02:52.467209  212383 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1124 14:02:52.467305  212383 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:02:52.467389  212383 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 14:02:52.467470  212383 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.741501846s
	I1124 14:02:52.467552  212383 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.503243598s
	I1124 14:02:52.467627  212383 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.824874472s
	I1124 14:02:52.467741  212383 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:02:52.467875  212383 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:02:52.467955  212383 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:02:52.468176  212383 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-609438 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:02:52.468237  212383 kubeadm.go:319] [bootstrap-token] Using token: vzq4ay.serxkml6gk1378wv
	I1124 14:02:52.471358  212383 out.go:252]   - Configuring RBAC rules ...
	I1124 14:02:52.471499  212383 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:02:52.471591  212383 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:02:52.471743  212383 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:02:52.471880  212383 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:02:52.472017  212383 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:02:52.472112  212383 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:02:52.472236  212383 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:02:52.472282  212383 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:02:52.472331  212383 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:02:52.472335  212383 kubeadm.go:319] 
	I1124 14:02:52.472400  212383 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:02:52.472411  212383 kubeadm.go:319] 
	I1124 14:02:52.472495  212383 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:02:52.472499  212383 kubeadm.go:319] 
	I1124 14:02:52.472526  212383 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:02:52.472589  212383 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:02:52.472643  212383 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:02:52.472647  212383 kubeadm.go:319] 
	I1124 14:02:52.472705  212383 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:02:52.472709  212383 kubeadm.go:319] 
	I1124 14:02:52.472759  212383 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:02:52.472763  212383 kubeadm.go:319] 
	I1124 14:02:52.472819  212383 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:02:52.472899  212383 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:02:52.472973  212383 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:02:52.472976  212383 kubeadm.go:319] 
	I1124 14:02:52.473067  212383 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:02:52.473150  212383 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:02:52.473154  212383 kubeadm.go:319] 
	I1124 14:02:52.473251  212383 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token vzq4ay.serxkml6gk1378wv \
	I1124 14:02:52.473364  212383 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac \
	I1124 14:02:52.473385  212383 kubeadm.go:319] 	--control-plane 
	I1124 14:02:52.473389  212383 kubeadm.go:319] 
	I1124 14:02:52.473481  212383 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:02:52.473484  212383 kubeadm.go:319] 
	I1124 14:02:52.473573  212383 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token vzq4ay.serxkml6gk1378wv \
	I1124 14:02:52.473696  212383 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac 
	I1124 14:02:52.473705  212383 cni.go:84] Creating CNI manager for ""
	I1124 14:02:52.473711  212383 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:02:52.476852  212383 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:02:52.479922  212383 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:02:52.489605  212383 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 14:02:52.489623  212383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:02:52.536790  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 14:02:53.413438  212383 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:02:53.413571  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:53.413654  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-609438 minikube.k8s.io/updated_at=2025_11_24T14_02_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=default-k8s-diff-port-609438 minikube.k8s.io/primary=true
	I1124 14:02:53.507283  212383 ops.go:34] apiserver oom_adj: -16
	I1124 14:02:53.863033  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:50.808351  213570 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002003298s
	I1124 14:02:50.815187  213570 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:02:50.815743  213570 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 14:02:50.816608  213570 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:02:50.818559  213570 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 14:02:54.363074  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:54.863777  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:55.363086  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:55.863114  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:56.363110  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:56.863441  212383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:02:57.057097  212383 kubeadm.go:1114] duration metric: took 3.643574546s to wait for elevateKubeSystemPrivileges
	I1124 14:02:57.057124  212383 kubeadm.go:403] duration metric: took 26.886093324s to StartCluster
	I1124 14:02:57.057141  212383 settings.go:142] acquiring lock: {Name:mk2b0bbff4d8ced468f457362668d43b813dc062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:57.057204  212383 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:02:57.057903  212383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/kubeconfig: {Name:mk246d21eaffbd8aca2abdc1b2f89d6fcc902f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:02:57.058100  212383 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 14:02:57.058223  212383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:02:57.058472  212383 config.go:182] Loaded profile config "default-k8s-diff-port-609438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:02:57.058507  212383 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:02:57.058563  212383 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-609438"
	I1124 14:02:57.058577  212383 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-609438"
	I1124 14:02:57.058598  212383 host.go:66] Checking if "default-k8s-diff-port-609438" exists ...
	I1124 14:02:57.059105  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:57.059672  212383 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-609438"
	I1124 14:02:57.059698  212383 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-609438"
	I1124 14:02:57.060034  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:57.062295  212383 out.go:179] * Verifying Kubernetes components...
	I1124 14:02:57.067608  212383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:02:57.096470  212383 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:02:57.100431  212383 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:02:57.100453  212383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:02:57.100520  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:57.108007  212383 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-609438"
	I1124 14:02:57.108047  212383 host.go:66] Checking if "default-k8s-diff-port-609438" exists ...
	I1124 14:02:57.108469  212383 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-609438 --format={{.State.Status}}
	I1124 14:02:57.150290  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:57.151191  212383 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:02:57.151207  212383 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:02:57.151270  212383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-609438
	I1124 14:02:57.180229  212383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/default-k8s-diff-port-609438/id_rsa Username:docker}
	I1124 14:02:57.835181  212383 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:02:57.835375  212383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 14:02:57.843296  212383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:02:58.048720  212383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:02:55.577519  213570 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.75919955s
	I1124 14:02:57.488695  213570 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.669631688s
	I1124 14:02:59.319576  213570 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.503330978s
	I1124 14:02:59.347736  213570 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:02:59.365960  213570 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:02:59.389045  213570 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:02:59.389257  213570 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-593634 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:02:59.404075  213570 kubeadm.go:319] [bootstrap-token] Using token: sdluey.txxijid8fmo5jyau
	I1124 14:02:59.018640  212383 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.183422592s)
	I1124 14:02:59.019392  212383 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-609438" to be "Ready" ...
	I1124 14:02:59.019719  212383 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.176349884s)
	I1124 14:02:59.020165  212383 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.184766141s)
	I1124 14:02:59.020204  212383 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 14:02:59.505284  212383 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.456466205s)
	I1124 14:02:59.508376  212383 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1124 14:02:59.407186  213570 out.go:252]   - Configuring RBAC rules ...
	I1124 14:02:59.407326  213570 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 14:02:59.413876  213570 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 14:02:59.424114  213570 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 14:02:59.429247  213570 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 14:02:59.435888  213570 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 14:02:59.441214  213570 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 14:02:59.729166  213570 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 14:03:00.281783  213570 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 14:03:00.726578  213570 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 14:03:00.731583  213570 kubeadm.go:319] 
	I1124 14:03:00.731683  213570 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 14:03:00.731705  213570 kubeadm.go:319] 
	I1124 14:03:00.731783  213570 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 14:03:00.731791  213570 kubeadm.go:319] 
	I1124 14:03:00.731817  213570 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 14:03:00.731879  213570 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 14:03:00.731955  213570 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 14:03:00.731964  213570 kubeadm.go:319] 
	I1124 14:03:00.732019  213570 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 14:03:00.732029  213570 kubeadm.go:319] 
	I1124 14:03:00.732077  213570 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 14:03:00.732085  213570 kubeadm.go:319] 
	I1124 14:03:00.732143  213570 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 14:03:00.732222  213570 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 14:03:00.732296  213570 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 14:03:00.732305  213570 kubeadm.go:319] 
	I1124 14:03:00.732391  213570 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 14:03:00.732470  213570 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 14:03:00.732477  213570 kubeadm.go:319] 
	I1124 14:03:00.732562  213570 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token sdluey.txxijid8fmo5jyau \
	I1124 14:03:00.732674  213570 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac \
	I1124 14:03:00.732700  213570 kubeadm.go:319] 	--control-plane 
	I1124 14:03:00.732708  213570 kubeadm.go:319] 
	I1124 14:03:00.732793  213570 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 14:03:00.732801  213570 kubeadm.go:319] 
	I1124 14:03:00.732883  213570 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token sdluey.txxijid8fmo5jyau \
	I1124 14:03:00.732989  213570 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aa948289582a95f47bab77808ca51e5d74f41a914fe1740ab9448815f8011aac 
	I1124 14:03:00.734466  213570 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 14:03:00.734704  213570 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 14:03:00.734818  213570 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 14:03:00.734840  213570 cni.go:84] Creating CNI manager for ""
	I1124 14:03:00.734847  213570 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:03:00.738356  213570 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 14:02:59.511261  212383 addons.go:530] duration metric: took 2.452743621s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 14:02:59.527883  212383 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-609438" context rescaled to 1 replicas
	W1124 14:03:01.022799  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:03.522484  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	I1124 14:03:00.741285  213570 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 14:03:00.747200  213570 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 14:03:00.747222  213570 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 14:03:00.762942  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 14:03:01.083756  213570 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:03:01.083943  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:01.084029  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-593634 minikube.k8s.io/updated_at=2025_11_24T14_03_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=embed-certs-593634 minikube.k8s.io/primary=true
	I1124 14:03:01.235259  213570 ops.go:34] apiserver oom_adj: -16
	I1124 14:03:01.235388  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:01.736213  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:02.235575  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:02.735531  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:03.235547  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:03.735985  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:04.235605  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:04.735509  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:05.235491  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:05.735597  213570 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 14:03:05.862499  213570 kubeadm.go:1114] duration metric: took 4.778639859s to wait for elevateKubeSystemPrivileges
	I1124 14:03:05.862539  213570 kubeadm.go:403] duration metric: took 24.734468729s to StartCluster
	I1124 14:03:05.862559  213570 settings.go:142] acquiring lock: {Name:mk2b0bbff4d8ced468f457362668d43b813dc062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:03:05.862641  213570 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:03:05.864034  213570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/kubeconfig: {Name:mk246d21eaffbd8aca2abdc1b2f89d6fcc902f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:03:05.864291  213570 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 14:03:05.864292  213570 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 14:03:05.864627  213570 config.go:182] Loaded profile config "embed-certs-593634": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:03:05.864675  213570 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:03:05.864760  213570 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-593634"
	I1124 14:03:05.864775  213570 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-593634"
	I1124 14:03:05.864814  213570 host.go:66] Checking if "embed-certs-593634" exists ...
	I1124 14:03:05.865448  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:03:05.865928  213570 addons.go:70] Setting default-storageclass=true in profile "embed-certs-593634"
	I1124 14:03:05.865962  213570 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-593634"
	I1124 14:03:05.866329  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:03:05.867882  213570 out.go:179] * Verifying Kubernetes components...
	I1124 14:03:05.871678  213570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:03:05.918376  213570 addons.go:239] Setting addon default-storageclass=true in "embed-certs-593634"
	I1124 14:03:05.918427  213570 host.go:66] Checking if "embed-certs-593634" exists ...
	I1124 14:03:05.919006  213570 cli_runner.go:164] Run: docker container inspect embed-certs-593634 --format={{.State.Status}}
	I1124 14:03:05.928779  213570 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:03:05.931678  213570 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:03:05.931712  213570 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:03:05.931788  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:03:05.962335  213570 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:03:05.962376  213570 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:03:05.962476  213570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-593634
	I1124 14:03:05.993403  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:03:06.003508  213570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/embed-certs-593634/id_rsa Username:docker}
	I1124 14:03:06.391385  213570 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 14:03:06.391488  213570 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:03:06.435021  213570 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:03:06.439159  213570 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:03:06.771396  213570 node_ready.go:35] waiting up to 6m0s for node "embed-certs-593634" to be "Ready" ...
	I1124 14:03:06.771837  213570 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 14:03:07.089005  213570 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1124 14:03:06.022254  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:08.023381  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	I1124 14:03:07.091942  213570 addons.go:530] duration metric: took 1.22725676s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 14:03:07.275615  213570 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-593634" context rescaled to 1 replicas
	W1124 14:03:08.774304  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:10.522868  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:12.525848  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:10.776272  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:13.274310  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:15.274775  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:14.526016  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:17.023060  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:17.774691  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:20.274332  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:19.523467  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:21.524121  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:23.524697  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:22.774276  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:24.775051  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:26.022538  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:28.023018  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:27.274791  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:29.275073  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:30.030420  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:32.524753  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:31.774872  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:34.274493  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:35.023155  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:37.025173  212383 node_ready.go:57] node "default-k8s-diff-port-609438" has "Ready":"False" status (will retry)
	W1124 14:03:36.275275  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:38.774804  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	I1124 14:03:39.023101  212383 node_ready.go:49] node "default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:39.023134  212383 node_ready.go:38] duration metric: took 40.003724122s for node "default-k8s-diff-port-609438" to be "Ready" ...
	I1124 14:03:39.023149  212383 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:03:39.023211  212383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:03:39.035892  212383 api_server.go:72] duration metric: took 41.977763431s to wait for apiserver process to appear ...
	I1124 14:03:39.035957  212383 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:03:39.035992  212383 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 14:03:39.045601  212383 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 14:03:39.046766  212383 api_server.go:141] control plane version: v1.34.1
	I1124 14:03:39.046790  212383 api_server.go:131] duration metric: took 10.8162ms to wait for apiserver health ...
	I1124 14:03:39.046799  212383 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:03:39.057366  212383 system_pods.go:59] 8 kube-system pods found
	I1124 14:03:39.057464  212383 system_pods.go:61] "coredns-66bc5c9577-qctbs" [cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:39.057486  212383 system_pods.go:61] "etcd-default-k8s-diff-port-609438" [3e2d5715-12d7-441e-9747-edb4c6f78577] Running
	I1124 14:03:39.057527  212383 system_pods.go:61] "kindnet-jcqb9" [92836c58-7b28-4b1b-838d-9491cd23823b] Running
	I1124 14:03:39.057552  212383 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-609438" [b6e69d70-9c7f-4b06-8ba8-a37c17d79bb9] Running
	I1124 14:03:39.057573  212383 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-609438" [a1dba2ac-ba3c-4282-966e-c7abffbb6b9a] Running
	I1124 14:03:39.057612  212383 system_pods.go:61] "kube-proxy-frlpg" [814cc9f1-7449-431c-a35d-3ac3b4d05db9] Running
	I1124 14:03:39.057637  212383 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-609438" [a87b6471-6253-4b9c-abd1-83d029df6343] Running
	I1124 14:03:39.057664  212383 system_pods.go:61] "storage-provisioner" [98d7eb97-3a94-4904-9af3-f063689cec40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:39.057702  212383 system_pods.go:74] duration metric: took 10.895381ms to wait for pod list to return data ...
	I1124 14:03:39.057729  212383 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:03:39.068310  212383 default_sa.go:45] found service account: "default"
	I1124 14:03:39.068335  212383 default_sa.go:55] duration metric: took 10.585051ms for default service account to be created ...
	I1124 14:03:39.068346  212383 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:03:39.072487  212383 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:39.072578  212383 system_pods.go:89] "coredns-66bc5c9577-qctbs" [cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:39.072601  212383 system_pods.go:89] "etcd-default-k8s-diff-port-609438" [3e2d5715-12d7-441e-9747-edb4c6f78577] Running
	I1124 14:03:39.072648  212383 system_pods.go:89] "kindnet-jcqb9" [92836c58-7b28-4b1b-838d-9491cd23823b] Running
	I1124 14:03:39.072673  212383 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-609438" [b6e69d70-9c7f-4b06-8ba8-a37c17d79bb9] Running
	I1124 14:03:39.072696  212383 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-609438" [a1dba2ac-ba3c-4282-966e-c7abffbb6b9a] Running
	I1124 14:03:39.072735  212383 system_pods.go:89] "kube-proxy-frlpg" [814cc9f1-7449-431c-a35d-3ac3b4d05db9] Running
	I1124 14:03:39.072761  212383 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-609438" [a87b6471-6253-4b9c-abd1-83d029df6343] Running
	I1124 14:03:39.072785  212383 system_pods.go:89] "storage-provisioner" [98d7eb97-3a94-4904-9af3-f063689cec40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:39.072847  212383 retry.go:31] will retry after 264.799989ms: missing components: kube-dns
	I1124 14:03:39.342534  212383 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:39.342686  212383 system_pods.go:89] "coredns-66bc5c9577-qctbs" [cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:39.342725  212383 system_pods.go:89] "etcd-default-k8s-diff-port-609438" [3e2d5715-12d7-441e-9747-edb4c6f78577] Running
	I1124 14:03:39.342754  212383 system_pods.go:89] "kindnet-jcqb9" [92836c58-7b28-4b1b-838d-9491cd23823b] Running
	I1124 14:03:39.342775  212383 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-609438" [b6e69d70-9c7f-4b06-8ba8-a37c17d79bb9] Running
	I1124 14:03:39.342816  212383 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-609438" [a1dba2ac-ba3c-4282-966e-c7abffbb6b9a] Running
	I1124 14:03:39.342842  212383 system_pods.go:89] "kube-proxy-frlpg" [814cc9f1-7449-431c-a35d-3ac3b4d05db9] Running
	I1124 14:03:39.342864  212383 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-609438" [a87b6471-6253-4b9c-abd1-83d029df6343] Running
	I1124 14:03:39.342912  212383 system_pods.go:89] "storage-provisioner" [98d7eb97-3a94-4904-9af3-f063689cec40] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:39.342941  212383 retry.go:31] will retry after 272.670872ms: missing components: kube-dns
	I1124 14:03:39.626215  212383 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:39.626242  212383 system_pods.go:89] "coredns-66bc5c9577-qctbs" [cd79b5ea-c0d8-4b5c-94a4-6743b53cd9de] Running
	I1124 14:03:39.626248  212383 system_pods.go:89] "etcd-default-k8s-diff-port-609438" [3e2d5715-12d7-441e-9747-edb4c6f78577] Running
	I1124 14:03:39.626254  212383 system_pods.go:89] "kindnet-jcqb9" [92836c58-7b28-4b1b-838d-9491cd23823b] Running
	I1124 14:03:39.626258  212383 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-609438" [b6e69d70-9c7f-4b06-8ba8-a37c17d79bb9] Running
	I1124 14:03:39.626271  212383 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-609438" [a1dba2ac-ba3c-4282-966e-c7abffbb6b9a] Running
	I1124 14:03:39.626274  212383 system_pods.go:89] "kube-proxy-frlpg" [814cc9f1-7449-431c-a35d-3ac3b4d05db9] Running
	I1124 14:03:39.626278  212383 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-609438" [a87b6471-6253-4b9c-abd1-83d029df6343] Running
	I1124 14:03:39.626282  212383 system_pods.go:89] "storage-provisioner" [98d7eb97-3a94-4904-9af3-f063689cec40] Running
	I1124 14:03:39.626289  212383 system_pods.go:126] duration metric: took 557.937565ms to wait for k8s-apps to be running ...
	I1124 14:03:39.626297  212383 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:03:39.626351  212383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:03:39.649756  212383 system_svc.go:56] duration metric: took 23.432209ms WaitForService to wait for kubelet
	I1124 14:03:39.649833  212383 kubeadm.go:587] duration metric: took 42.591709093s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:03:39.649867  212383 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:03:39.658388  212383 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:03:39.658418  212383 node_conditions.go:123] node cpu capacity is 2
	I1124 14:03:39.658433  212383 node_conditions.go:105] duration metric: took 8.545281ms to run NodePressure ...
	I1124 14:03:39.658445  212383 start.go:242] waiting for startup goroutines ...
	I1124 14:03:39.658453  212383 start.go:247] waiting for cluster config update ...
	I1124 14:03:39.658464  212383 start.go:256] writing updated cluster config ...
	I1124 14:03:39.658759  212383 ssh_runner.go:195] Run: rm -f paused
	I1124 14:03:39.662925  212383 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:03:39.668038  212383 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qctbs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.673734  212383 pod_ready.go:94] pod "coredns-66bc5c9577-qctbs" is "Ready"
	I1124 14:03:39.673815  212383 pod_ready.go:86] duration metric: took 5.694049ms for pod "coredns-66bc5c9577-qctbs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.676472  212383 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.685362  212383 pod_ready.go:94] pod "etcd-default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:39.685439  212383 pod_ready.go:86] duration metric: took 8.894816ms for pod "etcd-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.688312  212383 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.695577  212383 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:39.695663  212383 pod_ready.go:86] duration metric: took 7.234136ms for pod "kube-apiserver-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:39.698560  212383 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:40.070303  212383 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:40.070379  212383 pod_ready.go:86] duration metric: took 371.738474ms for pod "kube-controller-manager-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:40.267521  212383 pod_ready.go:83] waiting for pod "kube-proxy-frlpg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:40.667723  212383 pod_ready.go:94] pod "kube-proxy-frlpg" is "Ready"
	I1124 14:03:40.667753  212383 pod_ready.go:86] duration metric: took 400.161589ms for pod "kube-proxy-frlpg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:40.868901  212383 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:41.268703  212383 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-609438" is "Ready"
	I1124 14:03:41.268732  212383 pod_ready.go:86] duration metric: took 399.797357ms for pod "kube-scheduler-default-k8s-diff-port-609438" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:41.268746  212383 pod_ready.go:40] duration metric: took 1.605732693s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:03:41.331086  212383 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:03:41.336425  212383 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-609438" cluster and "default" namespace by default
	W1124 14:03:41.279143  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:43.774833  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:45.775431  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	W1124 14:03:48.275442  213570 node_ready.go:57] node "embed-certs-593634" has "Ready":"False" status (will retry)
	I1124 14:03:48.774783  213570 node_ready.go:49] node "embed-certs-593634" is "Ready"
	I1124 14:03:48.774815  213570 node_ready.go:38] duration metric: took 42.00333297s for node "embed-certs-593634" to be "Ready" ...
	I1124 14:03:48.774830  213570 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:03:48.774888  213570 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:03:48.787878  213570 api_server.go:72] duration metric: took 42.923556551s to wait for apiserver process to appear ...
	I1124 14:03:48.787947  213570 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:03:48.787968  213570 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 14:03:48.796278  213570 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 14:03:48.797266  213570 api_server.go:141] control plane version: v1.34.1
	I1124 14:03:48.797292  213570 api_server.go:131] duration metric: took 9.336207ms to wait for apiserver health ...
	I1124 14:03:48.797301  213570 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:03:48.800410  213570 system_pods.go:59] 8 kube-system pods found
	I1124 14:03:48.800444  213570 system_pods.go:61] "coredns-66bc5c9577-jjgxr" [9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:48.800451  213570 system_pods.go:61] "etcd-embed-certs-593634" [1ad343da-778d-475d-a5ce-fc08e11f693c] Running
	I1124 14:03:48.800456  213570 system_pods.go:61] "kindnet-2xhmk" [a716bd95-8847-4c78-a39c-0234825c66fb] Running
	I1124 14:03:48.800460  213570 system_pods.go:61] "kube-apiserver-embed-certs-593634" [2a958fa8-640e-4d6e-80a4-4cb5abb541bf] Running
	I1124 14:03:48.800464  213570 system_pods.go:61] "kube-controller-manager-embed-certs-593634" [5897c242-4f69-4740-bc24-712bc8bdb2f6] Running
	I1124 14:03:48.800468  213570 system_pods.go:61] "kube-proxy-t2c22" [62324907-3da3-4c2c-887d-798d8375da05] Running
	I1124 14:03:48.800472  213570 system_pods.go:61] "kube-scheduler-embed-certs-593634" [b881f394-008a-4da5-87fe-94a9d922e12c] Running
	I1124 14:03:48.800477  213570 system_pods.go:61] "storage-provisioner" [45b3891f-97a3-4dcb-bafa-b1400a3b4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:48.800489  213570 system_pods.go:74] duration metric: took 3.183028ms to wait for pod list to return data ...
	I1124 14:03:48.800497  213570 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:03:48.803083  213570 default_sa.go:45] found service account: "default"
	I1124 14:03:48.803109  213570 default_sa.go:55] duration metric: took 2.606184ms for default service account to be created ...
	I1124 14:03:48.803119  213570 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:03:48.806286  213570 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:48.806321  213570 system_pods.go:89] "coredns-66bc5c9577-jjgxr" [9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:48.806328  213570 system_pods.go:89] "etcd-embed-certs-593634" [1ad343da-778d-475d-a5ce-fc08e11f693c] Running
	I1124 14:03:48.806334  213570 system_pods.go:89] "kindnet-2xhmk" [a716bd95-8847-4c78-a39c-0234825c66fb] Running
	I1124 14:03:48.806365  213570 system_pods.go:89] "kube-apiserver-embed-certs-593634" [2a958fa8-640e-4d6e-80a4-4cb5abb541bf] Running
	I1124 14:03:48.806377  213570 system_pods.go:89] "kube-controller-manager-embed-certs-593634" [5897c242-4f69-4740-bc24-712bc8bdb2f6] Running
	I1124 14:03:48.806381  213570 system_pods.go:89] "kube-proxy-t2c22" [62324907-3da3-4c2c-887d-798d8375da05] Running
	I1124 14:03:48.806385  213570 system_pods.go:89] "kube-scheduler-embed-certs-593634" [b881f394-008a-4da5-87fe-94a9d922e12c] Running
	I1124 14:03:48.806395  213570 system_pods.go:89] "storage-provisioner" [45b3891f-97a3-4dcb-bafa-b1400a3b4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:48.806421  213570 retry.go:31] will retry after 312.175321ms: missing components: kube-dns
	I1124 14:03:49.124170  213570 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:49.124261  213570 system_pods.go:89] "coredns-66bc5c9577-jjgxr" [9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:49.124283  213570 system_pods.go:89] "etcd-embed-certs-593634" [1ad343da-778d-475d-a5ce-fc08e11f693c] Running
	I1124 14:03:49.124327  213570 system_pods.go:89] "kindnet-2xhmk" [a716bd95-8847-4c78-a39c-0234825c66fb] Running
	I1124 14:03:49.124354  213570 system_pods.go:89] "kube-apiserver-embed-certs-593634" [2a958fa8-640e-4d6e-80a4-4cb5abb541bf] Running
	I1124 14:03:49.124376  213570 system_pods.go:89] "kube-controller-manager-embed-certs-593634" [5897c242-4f69-4740-bc24-712bc8bdb2f6] Running
	I1124 14:03:49.124412  213570 system_pods.go:89] "kube-proxy-t2c22" [62324907-3da3-4c2c-887d-798d8375da05] Running
	I1124 14:03:49.124439  213570 system_pods.go:89] "kube-scheduler-embed-certs-593634" [b881f394-008a-4da5-87fe-94a9d922e12c] Running
	I1124 14:03:49.124462  213570 system_pods.go:89] "storage-provisioner" [45b3891f-97a3-4dcb-bafa-b1400a3b4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:49.124508  213570 retry.go:31] will retry after 274.806291ms: missing components: kube-dns
	I1124 14:03:49.404719  213570 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:49.404754  213570 system_pods.go:89] "coredns-66bc5c9577-jjgxr" [9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:03:49.404761  213570 system_pods.go:89] "etcd-embed-certs-593634" [1ad343da-778d-475d-a5ce-fc08e11f693c] Running
	I1124 14:03:49.404768  213570 system_pods.go:89] "kindnet-2xhmk" [a716bd95-8847-4c78-a39c-0234825c66fb] Running
	I1124 14:03:49.404772  213570 system_pods.go:89] "kube-apiserver-embed-certs-593634" [2a958fa8-640e-4d6e-80a4-4cb5abb541bf] Running
	I1124 14:03:49.404776  213570 system_pods.go:89] "kube-controller-manager-embed-certs-593634" [5897c242-4f69-4740-bc24-712bc8bdb2f6] Running
	I1124 14:03:49.404780  213570 system_pods.go:89] "kube-proxy-t2c22" [62324907-3da3-4c2c-887d-798d8375da05] Running
	I1124 14:03:49.404784  213570 system_pods.go:89] "kube-scheduler-embed-certs-593634" [b881f394-008a-4da5-87fe-94a9d922e12c] Running
	I1124 14:03:49.404789  213570 system_pods.go:89] "storage-provisioner" [45b3891f-97a3-4dcb-bafa-b1400a3b4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:03:49.404803  213570 retry.go:31] will retry after 483.554421ms: missing components: kube-dns
	I1124 14:03:49.894105  213570 system_pods.go:86] 8 kube-system pods found
	I1124 14:03:49.894135  213570 system_pods.go:89] "coredns-66bc5c9577-jjgxr" [9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66] Running
	I1124 14:03:49.894142  213570 system_pods.go:89] "etcd-embed-certs-593634" [1ad343da-778d-475d-a5ce-fc08e11f693c] Running
	I1124 14:03:49.894146  213570 system_pods.go:89] "kindnet-2xhmk" [a716bd95-8847-4c78-a39c-0234825c66fb] Running
	I1124 14:03:49.894151  213570 system_pods.go:89] "kube-apiserver-embed-certs-593634" [2a958fa8-640e-4d6e-80a4-4cb5abb541bf] Running
	I1124 14:03:49.894156  213570 system_pods.go:89] "kube-controller-manager-embed-certs-593634" [5897c242-4f69-4740-bc24-712bc8bdb2f6] Running
	I1124 14:03:49.894161  213570 system_pods.go:89] "kube-proxy-t2c22" [62324907-3da3-4c2c-887d-798d8375da05] Running
	I1124 14:03:49.894165  213570 system_pods.go:89] "kube-scheduler-embed-certs-593634" [b881f394-008a-4da5-87fe-94a9d922e12c] Running
	I1124 14:03:49.894169  213570 system_pods.go:89] "storage-provisioner" [45b3891f-97a3-4dcb-bafa-b1400a3b4480] Running
	I1124 14:03:49.894178  213570 system_pods.go:126] duration metric: took 1.091052703s to wait for k8s-apps to be running ...
	I1124 14:03:49.894185  213570 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:03:49.894238  213570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:03:49.917451  213570 system_svc.go:56] duration metric: took 23.256451ms WaitForService to wait for kubelet
	I1124 14:03:49.917492  213570 kubeadm.go:587] duration metric: took 44.053162457s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:03:49.917516  213570 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:03:49.923758  213570 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:03:49.923792  213570 node_conditions.go:123] node cpu capacity is 2
	I1124 14:03:49.923807  213570 node_conditions.go:105] duration metric: took 6.285283ms to run NodePressure ...
	I1124 14:03:49.923820  213570 start.go:242] waiting for startup goroutines ...
	I1124 14:03:49.923828  213570 start.go:247] waiting for cluster config update ...
	I1124 14:03:49.923839  213570 start.go:256] writing updated cluster config ...
	I1124 14:03:49.924206  213570 ssh_runner.go:195] Run: rm -f paused
	I1124 14:03:49.927626  213570 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:03:49.931893  213570 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jjgxr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.942828  213570 pod_ready.go:94] pod "coredns-66bc5c9577-jjgxr" is "Ready"
	I1124 14:03:49.942856  213570 pod_ready.go:86] duration metric: took 10.828769ms for pod "coredns-66bc5c9577-jjgxr" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.945912  213570 pod_ready.go:83] waiting for pod "etcd-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.951340  213570 pod_ready.go:94] pod "etcd-embed-certs-593634" is "Ready"
	I1124 14:03:49.951371  213570 pod_ready.go:86] duration metric: took 5.432769ms for pod "etcd-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.955119  213570 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.962767  213570 pod_ready.go:94] pod "kube-apiserver-embed-certs-593634" is "Ready"
	I1124 14:03:49.962795  213570 pod_ready.go:86] duration metric: took 7.64808ms for pod "kube-apiserver-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:49.966857  213570 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:50.332804  213570 pod_ready.go:94] pod "kube-controller-manager-embed-certs-593634" is "Ready"
	I1124 14:03:50.332831  213570 pod_ready.go:86] duration metric: took 365.944063ms for pod "kube-controller-manager-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:50.533022  213570 pod_ready.go:83] waiting for pod "kube-proxy-t2c22" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:50.932652  213570 pod_ready.go:94] pod "kube-proxy-t2c22" is "Ready"
	I1124 14:03:50.932687  213570 pod_ready.go:86] duration metric: took 399.640527ms for pod "kube-proxy-t2c22" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:51.133145  213570 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:51.532686  213570 pod_ready.go:94] pod "kube-scheduler-embed-certs-593634" is "Ready"
	I1124 14:03:51.532723  213570 pod_ready.go:86] duration metric: took 399.546574ms for pod "kube-scheduler-embed-certs-593634" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:03:51.532738  213570 pod_ready.go:40] duration metric: took 1.605063201s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:03:51.763100  213570 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:03:51.766630  213570 out.go:179] * Done! kubectl is now configured to use "embed-certs-593634" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	f82ca073066cf       1611cd07b61d5       10 seconds ago       Running             busybox                   0                   d2bab1c2ee203       busybox                                      default
	8f3613b1af9f5       138784d87c9c5       16 seconds ago       Running             coredns                   0                   56eb5cfd1d547       coredns-66bc5c9577-jjgxr                     kube-system
	422cf5815a208       ba04bb24b9575       16 seconds ago       Running             storage-provisioner       0                   a4f25514c1964       storage-provisioner                          kube-system
	488f43af45940       05baa95f5142d       57 seconds ago       Running             kube-proxy                0                   b29bd178f0237       kube-proxy-t2c22                             kube-system
	d8d33a8f36018       b1a8c6f707935       57 seconds ago       Running             kindnet-cni               0                   134949bcd76c3       kindnet-2xhmk                                kube-system
	3a60d9be30d61       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   4a20d1b83a9ae       kube-apiserver-embed-certs-593634            kube-system
	16743d0401e11       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   3e0e7d4cacfb7       kube-controller-manager-embed-certs-593634   kube-system
	d86785ce1ba19       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   742bd542951dc       kube-scheduler-embed-certs-593634            kube-system
	ba70ac31cf979       a1894772a478e       About a minute ago   Running             etcd                      0                   66b61302af36b       etcd-embed-certs-593634                      kube-system
	
	
	==> containerd <==
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.178643565Z" level=info msg="CreateContainer within sandbox \"a4f25514c1964a4bad392ff80b25d804ec1e02345ceccbf1862a6d0a1fd8dfd7\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"422cf5815a208c4f42d49c97bb60a4d5a737aa0fd4371ddaf2bbf0da8af91cea\""
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.181690017Z" level=info msg="StartContainer for \"422cf5815a208c4f42d49c97bb60a4d5a737aa0fd4371ddaf2bbf0da8af91cea\""
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.184621292Z" level=info msg="connecting to shim 422cf5815a208c4f42d49c97bb60a4d5a737aa0fd4371ddaf2bbf0da8af91cea" address="unix:///run/containerd/s/6fd512b12bd919a3d55db088ea8349c84e932c10e41065b5f9aa0777efc07cb8" protocol=ttrpc version=3
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.186746157Z" level=info msg="Container 8f3613b1af9f58510f2488f3930203a9b7f874fe0d8361da5a4c7182aeab5ee3: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.203523107Z" level=info msg="CreateContainer within sandbox \"56eb5cfd1d5479dc5d3b4e73c73fa94c5cf5725e179c449e93ccdf1da24fb69b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f3613b1af9f58510f2488f3930203a9b7f874fe0d8361da5a4c7182aeab5ee3\""
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.204726264Z" level=info msg="StartContainer for \"8f3613b1af9f58510f2488f3930203a9b7f874fe0d8361da5a4c7182aeab5ee3\""
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.205629757Z" level=info msg="connecting to shim 8f3613b1af9f58510f2488f3930203a9b7f874fe0d8361da5a4c7182aeab5ee3" address="unix:///run/containerd/s/b7185fa15757b5512445d216ac38aca4a90e2a2db11f69c047d62aeda287db85" protocol=ttrpc version=3
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.268726088Z" level=info msg="StartContainer for \"422cf5815a208c4f42d49c97bb60a4d5a737aa0fd4371ddaf2bbf0da8af91cea\" returns successfully"
	Nov 24 14:03:49 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:49.300301905Z" level=info msg="StartContainer for \"8f3613b1af9f58510f2488f3930203a9b7f874fe0d8361da5a4c7182aeab5ee3\" returns successfully"
	Nov 24 14:03:52 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:52.442191903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f8c75830-451a-4be9-beb5-1131f44fca93,Namespace:default,Attempt:0,}"
	Nov 24 14:03:52 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:52.496274712Z" level=info msg="connecting to shim d2bab1c2ee20342f2bf5b4dd6f7e900c0fc64ebf0d26b010c4b3a6d507b1de6c" address="unix:///run/containerd/s/cae2fa33dbafd3c456ea071f9682925cc0c88b0756f7fd2c1865374e7138124c" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 14:03:52 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:52.616560668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f8c75830-451a-4be9-beb5-1131f44fca93,Namespace:default,Attempt:0,} returns sandbox id \"d2bab1c2ee20342f2bf5b4dd6f7e900c0fc64ebf0d26b010c4b3a6d507b1de6c\""
	Nov 24 14:03:52 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:52.628856409Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.841920297Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.846328406Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937189"
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.846432932Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.849370402Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.850083419Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.220987156s"
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.850130411Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.859615952Z" level=info msg="CreateContainer within sandbox \"d2bab1c2ee20342f2bf5b4dd6f7e900c0fc64ebf0d26b010c4b3a6d507b1de6c\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.875644183Z" level=info msg="Container f82ca073066cf9939535c6279e3af5e38acddcb484054bcce23ad7914656ebc1: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.888566261Z" level=info msg="CreateContainer within sandbox \"d2bab1c2ee20342f2bf5b4dd6f7e900c0fc64ebf0d26b010c4b3a6d507b1de6c\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"f82ca073066cf9939535c6279e3af5e38acddcb484054bcce23ad7914656ebc1\""
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.891618185Z" level=info msg="StartContainer for \"f82ca073066cf9939535c6279e3af5e38acddcb484054bcce23ad7914656ebc1\""
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.892801141Z" level=info msg="connecting to shim f82ca073066cf9939535c6279e3af5e38acddcb484054bcce23ad7914656ebc1" address="unix:///run/containerd/s/cae2fa33dbafd3c456ea071f9682925cc0c88b0756f7fd2c1865374e7138124c" protocol=ttrpc version=3
	Nov 24 14:03:54 embed-certs-593634 containerd[756]: time="2025-11-24T14:03:54.974262276Z" level=info msg="StartContainer for \"f82ca073066cf9939535c6279e3af5e38acddcb484054bcce23ad7914656ebc1\" returns successfully"
	
	
	==> coredns [8f3613b1af9f58510f2488f3930203a9b7f874fe0d8361da5a4c7182aeab5ee3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39972 - 47583 "HINFO IN 121077021184602861.149659788633537211. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.024187178s
	
	
	==> describe nodes <==
	Name:               embed-certs-593634
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-593634
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=embed-certs-593634
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_03_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:02:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-593634
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:04:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:04:02 +0000   Mon, 24 Nov 2025 14:02:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:04:02 +0000   Mon, 24 Nov 2025 14:02:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:04:02 +0000   Mon, 24 Nov 2025 14:02:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:04:02 +0000   Mon, 24 Nov 2025 14:03:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-593634
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                e4ef0f99-1a9a-4cde-9064-423d8b90181c
	  Boot ID:                    dd480c26-e101-4930-b98c-54c06b430fdc
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-jjgxr                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     59s
	  kube-system                 etcd-embed-certs-593634                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         66s
	  kube-system                 kindnet-2xhmk                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      60s
	  kube-system                 kube-apiserver-embed-certs-593634             250m (12%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-controller-manager-embed-certs-593634    200m (10%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-proxy-t2c22                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-embed-certs-593634             100m (5%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 56s                kube-proxy       
	  Warning  CgroupV1                 75s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  75s (x8 over 75s)  kubelet          Node embed-certs-593634 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    75s (x8 over 75s)  kubelet          Node embed-certs-593634 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     75s (x7 over 75s)  kubelet          Node embed-certs-593634 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 65s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  65s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  65s                kubelet          Node embed-certs-593634 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s                kubelet          Node embed-certs-593634 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s                kubelet          Node embed-certs-593634 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           61s                node-controller  Node embed-certs-593634 event: Registered Node embed-certs-593634 in Controller
	  Normal   NodeReady                17s                kubelet          Node embed-certs-593634 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 12:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014697] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497291] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033884] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.804993] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.476130] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [ba70ac31cf979f3847171923cc96cefda27e391c3648e7c5dc513e3347116c24] <==
	{"level":"warn","ts":"2025-11-24T14:02:54.950803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:54.971534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.026006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.083106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.108992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.125922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.145993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.169733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.190603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.231586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.231938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.253261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.292628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.311727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.327796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.347182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.368705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.438447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.455662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.547376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.552263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.568140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.595996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.615225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:02:55.737669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38362","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:04:05 up  1:46,  0 user,  load average: 2.57, 3.34, 3.02
	Linux embed-certs-593634 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d8d33a8f36018747afa88137af5d6a8191a723a5d7f8346b8bd229e79e9811be] <==
	I1124 14:03:08.365479       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:03:08.366449       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 14:03:08.366825       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:03:08.366897       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:03:08.366914       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:03:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:03:08.569422       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:03:08.569508       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:03:08.569540       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:03:08.571989       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:03:38.569149       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 14:03:38.571353       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:03:38.572368       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:03:38.572375       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1124 14:03:40.070308       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:03:40.070421       1 metrics.go:72] Registering metrics
	I1124 14:03:40.070581       1 controller.go:711] "Syncing nftables rules"
	I1124 14:03:48.575778       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:03:48.575981       1 main.go:301] handling current node
	I1124 14:03:58.569104       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 14:03:58.569141       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3a60d9be30d6103a95f401caf2bb929b5c49ebfce9a7b132430f55718822e815] <==
	I1124 14:02:57.073133       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:02:57.073335       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 14:02:57.091800       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 14:02:57.107762       1 controller.go:667] quota admission added evaluator for: namespaces
	E1124 14:02:57.200702       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1124 14:02:57.200770       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1124 14:02:57.411896       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:02:57.773921       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 14:02:57.794161       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 14:02:57.794193       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:02:58.858899       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:02:58.963205       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:02:59.174464       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 14:02:59.205211       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1124 14:02:59.206560       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:02:59.212549       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:02:59.903589       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:03:00.175597       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:03:00.248650       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 14:03:00.422202       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 14:03:05.598021       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:03:05.958977       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 14:03:06.109830       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:03:06.148704       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1124 14:04:02.256587       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:37348: use of closed network connection
	
	
	==> kube-controller-manager [16743d0401e1150054e0ee1e6961814398310e73894c86d0327344c25bf7d7b8] <==
	I1124 14:03:04.943768       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 14:03:04.945223       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:03:04.946513       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 14:03:04.946645       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 14:03:04.947760       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 14:03:04.947887       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 14:03:04.949254       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 14:03:04.949649       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 14:03:04.952748       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 14:03:04.954662       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 14:03:04.957152       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 14:03:04.961745       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 14:03:04.972013       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:03:04.981489       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:03:04.991118       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:03:04.991248       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:03:04.991550       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 14:03:04.993610       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:03:04.993742       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:03:04.993864       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 14:03:04.995277       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 14:03:04.995391       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 14:03:04.995487       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 14:03:04.997279       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 14:03:49.947871       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [488f43af45940b994171e0ae482dcd33c6d809a0fc0db195d899b287b06a5941] <==
	I1124 14:03:08.502899       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:03:08.619457       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:03:08.720016       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:03:08.720258       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 14:03:08.720406       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:03:08.745047       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:03:08.745108       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:03:08.750446       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:03:08.750957       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:03:08.750980       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:03:08.752867       1 config.go:200] "Starting service config controller"
	I1124 14:03:08.752900       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:03:08.752924       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:03:08.752929       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:03:08.752954       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:03:08.753237       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:03:08.757937       1 config.go:309] "Starting node config controller"
	I1124 14:03:08.758160       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:03:08.758239       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:03:08.853992       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:03:08.854011       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:03:08.854049       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d86785ce1ba191be091bc75c25b6729a402901526d6d2888340f1cd1d00aa1fb] <==
	I1124 14:02:57.423648       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:02:57.423904       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 14:02:57.473856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 14:02:57.475095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 14:02:57.475230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:02:57.475291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 14:02:57.484569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 14:02:57.484898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 14:02:57.485038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 14:02:57.485110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 14:02:57.485159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 14:02:57.485237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 14:02:57.485305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 14:02:57.485364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 14:02:57.485401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 14:02:57.485439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 14:02:57.485479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 14:02:57.485524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 14:02:57.485633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 14:02:57.486875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 14:02:57.486976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 14:02:58.328608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 14:02:58.421849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 14:02:58.450372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1124 14:03:00.622867       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:03:06 embed-certs-593634 kubelet[1458]: I1124 14:03:06.184089    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62324907-3da3-4c2c-887d-798d8375da05-lib-modules\") pod \"kube-proxy-t2c22\" (UID: \"62324907-3da3-4c2c-887d-798d8375da05\") " pod="kube-system/kube-proxy-t2c22"
	Nov 24 14:03:06 embed-certs-593634 kubelet[1458]: I1124 14:03:06.184150    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhtqp\" (UniqueName: \"kubernetes.io/projected/62324907-3da3-4c2c-887d-798d8375da05-kube-api-access-bhtqp\") pod \"kube-proxy-t2c22\" (UID: \"62324907-3da3-4c2c-887d-798d8375da05\") " pod="kube-system/kube-proxy-t2c22"
	Nov 24 14:03:06 embed-certs-593634 kubelet[1458]: I1124 14:03:06.184171    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a716bd95-8847-4c78-a39c-0234825c66fb-xtables-lock\") pod \"kindnet-2xhmk\" (UID: \"a716bd95-8847-4c78-a39c-0234825c66fb\") " pod="kube-system/kindnet-2xhmk"
	Nov 24 14:03:06 embed-certs-593634 kubelet[1458]: I1124 14:03:06.184214    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw657\" (UniqueName: \"kubernetes.io/projected/a716bd95-8847-4c78-a39c-0234825c66fb-kube-api-access-kw657\") pod \"kindnet-2xhmk\" (UID: \"a716bd95-8847-4c78-a39c-0234825c66fb\") " pod="kube-system/kindnet-2xhmk"
	Nov 24 14:03:06 embed-certs-593634 kubelet[1458]: I1124 14:03:06.184234    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62324907-3da3-4c2c-887d-798d8375da05-xtables-lock\") pod \"kube-proxy-t2c22\" (UID: \"62324907-3da3-4c2c-887d-798d8375da05\") " pod="kube-system/kube-proxy-t2c22"
	Nov 24 14:03:06 embed-certs-593634 kubelet[1458]: I1124 14:03:06.184253    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a716bd95-8847-4c78-a39c-0234825c66fb-lib-modules\") pod \"kindnet-2xhmk\" (UID: \"a716bd95-8847-4c78-a39c-0234825c66fb\") " pod="kube-system/kindnet-2xhmk"
	Nov 24 14:03:06 embed-certs-593634 kubelet[1458]: I1124 14:03:06.184273    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/62324907-3da3-4c2c-887d-798d8375da05-kube-proxy\") pod \"kube-proxy-t2c22\" (UID: \"62324907-3da3-4c2c-887d-798d8375da05\") " pod="kube-system/kube-proxy-t2c22"
	Nov 24 14:03:06 embed-certs-593634 kubelet[1458]: I1124 14:03:06.184290    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a716bd95-8847-4c78-a39c-0234825c66fb-cni-cfg\") pod \"kindnet-2xhmk\" (UID: \"a716bd95-8847-4c78-a39c-0234825c66fb\") " pod="kube-system/kindnet-2xhmk"
	Nov 24 14:03:07 embed-certs-593634 kubelet[1458]: E1124 14:03:07.319345    1458 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 24 14:03:07 embed-certs-593634 kubelet[1458]: E1124 14:03:07.319394    1458 projected.go:196] Error preparing data for projected volume kube-api-access-kw657 for pod kube-system/kindnet-2xhmk: failed to sync configmap cache: timed out waiting for the condition
	Nov 24 14:03:07 embed-certs-593634 kubelet[1458]: E1124 14:03:07.319494    1458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a716bd95-8847-4c78-a39c-0234825c66fb-kube-api-access-kw657 podName:a716bd95-8847-4c78-a39c-0234825c66fb nodeName:}" failed. No retries permitted until 2025-11-24 14:03:07.819467027 +0000 UTC m=+7.824199089 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kw657" (UniqueName: "kubernetes.io/projected/a716bd95-8847-4c78-a39c-0234825c66fb-kube-api-access-kw657") pod "kindnet-2xhmk" (UID: "a716bd95-8847-4c78-a39c-0234825c66fb") : failed to sync configmap cache: timed out waiting for the condition
	Nov 24 14:03:07 embed-certs-593634 kubelet[1458]: E1124 14:03:07.335416    1458 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 24 14:03:07 embed-certs-593634 kubelet[1458]: E1124 14:03:07.335468    1458 projected.go:196] Error preparing data for projected volume kube-api-access-bhtqp for pod kube-system/kube-proxy-t2c22: failed to sync configmap cache: timed out waiting for the condition
	Nov 24 14:03:07 embed-certs-593634 kubelet[1458]: E1124 14:03:07.335549    1458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62324907-3da3-4c2c-887d-798d8375da05-kube-api-access-bhtqp podName:62324907-3da3-4c2c-887d-798d8375da05 nodeName:}" failed. No retries permitted until 2025-11-24 14:03:07.835529037 +0000 UTC m=+7.840261115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bhtqp" (UniqueName: "kubernetes.io/projected/62324907-3da3-4c2c-887d-798d8375da05-kube-api-access-bhtqp") pod "kube-proxy-t2c22" (UID: "62324907-3da3-4c2c-887d-798d8375da05") : failed to sync configmap cache: timed out waiting for the condition
	Nov 24 14:03:07 embed-certs-593634 kubelet[1458]: I1124 14:03:07.898918    1458 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 14:03:08 embed-certs-593634 kubelet[1458]: I1124 14:03:08.602912    1458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t2c22" podStartSLOduration=3.602894075 podStartE2EDuration="3.602894075s" podCreationTimestamp="2025-11-24 14:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:03:08.587884518 +0000 UTC m=+8.592616588" watchObservedRunningTime="2025-11-24 14:03:08.602894075 +0000 UTC m=+8.607626137"
	Nov 24 14:03:08 embed-certs-593634 kubelet[1458]: I1124 14:03:08.603557    1458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2xhmk" podStartSLOduration=3.6035446589999998 podStartE2EDuration="3.603544659s" podCreationTimestamp="2025-11-24 14:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:03:08.602680009 +0000 UTC m=+8.607412079" watchObservedRunningTime="2025-11-24 14:03:08.603544659 +0000 UTC m=+8.608276729"
	Nov 24 14:03:48 embed-certs-593634 kubelet[1458]: I1124 14:03:48.652494    1458 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 14:03:48 embed-certs-593634 kubelet[1458]: I1124 14:03:48.781306    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/45b3891f-97a3-4dcb-bafa-b1400a3b4480-tmp\") pod \"storage-provisioner\" (UID: \"45b3891f-97a3-4dcb-bafa-b1400a3b4480\") " pod="kube-system/storage-provisioner"
	Nov 24 14:03:48 embed-certs-593634 kubelet[1458]: I1124 14:03:48.781355    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzlqp\" (UniqueName: \"kubernetes.io/projected/45b3891f-97a3-4dcb-bafa-b1400a3b4480-kube-api-access-fzlqp\") pod \"storage-provisioner\" (UID: \"45b3891f-97a3-4dcb-bafa-b1400a3b4480\") " pod="kube-system/storage-provisioner"
	Nov 24 14:03:48 embed-certs-593634 kubelet[1458]: I1124 14:03:48.882624    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66-config-volume\") pod \"coredns-66bc5c9577-jjgxr\" (UID: \"9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66\") " pod="kube-system/coredns-66bc5c9577-jjgxr"
	Nov 24 14:03:48 embed-certs-593634 kubelet[1458]: I1124 14:03:48.882851    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zcgd\" (UniqueName: \"kubernetes.io/projected/9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66-kube-api-access-4zcgd\") pod \"coredns-66bc5c9577-jjgxr\" (UID: \"9d72d6f6-d1ef-4fcd-9e24-be088e8a5e66\") " pod="kube-system/coredns-66bc5c9577-jjgxr"
	Nov 24 14:03:49 embed-certs-593634 kubelet[1458]: I1124 14:03:49.732052    1458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jjgxr" podStartSLOduration=43.732031029 podStartE2EDuration="43.732031029s" podCreationTimestamp="2025-11-24 14:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:03:49.702473139 +0000 UTC m=+49.707205217" watchObservedRunningTime="2025-11-24 14:03:49.732031029 +0000 UTC m=+49.736763091"
	Nov 24 14:03:49 embed-certs-593634 kubelet[1458]: I1124 14:03:49.761125    1458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.761104572 podStartE2EDuration="42.761104572s" podCreationTimestamp="2025-11-24 14:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:03:49.733124112 +0000 UTC m=+49.737856182" watchObservedRunningTime="2025-11-24 14:03:49.761104572 +0000 UTC m=+49.765836643"
	Nov 24 14:03:52 embed-certs-593634 kubelet[1458]: I1124 14:03:52.315700    1458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxwvm\" (UniqueName: \"kubernetes.io/projected/f8c75830-451a-4be9-beb5-1131f44fca93-kube-api-access-fxwvm\") pod \"busybox\" (UID: \"f8c75830-451a-4be9-beb5-1131f44fca93\") " pod="default/busybox"
	
	
	==> storage-provisioner [422cf5815a208c4f42d49c97bb60a4d5a737aa0fd4371ddaf2bbf0da8af91cea] <==
	I1124 14:03:49.296395       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:03:49.311480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:49.321306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:03:49.321631       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:03:49.324019       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-593634_393582fa-c8fe-4cfe-bbf4-56facd09b640!
	I1124 14:03:49.324344       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"30a9fdad-3f98-4373-8429-132f81eb40fd", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-593634_393582fa-c8fe-4cfe-bbf4-56facd09b640 became leader
	W1124 14:03:49.339974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:49.366739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:03:49.424489       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-593634_393582fa-c8fe-4cfe-bbf4-56facd09b640!
	W1124 14:03:51.371354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:51.377432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:53.380386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:53.385451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:55.389054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:55.394702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:57.398135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:57.403152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:59.406676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:03:59.411593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:04:01.415364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:04:01.420521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:04:03.430407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:04:03.439832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:04:05.442919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:04:05.447587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-593634 -n embed-certs-593634
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-593634 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (14.71s)

x
+
TestStartStop/group/no-preload/serial/DeployApp (15.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-694102 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d1d404ac-cc11-4eb6-ae07-b81ddad14d37] Pending
helpers_test.go:352: "busybox" [d1d404ac-cc11-4eb6-ae07-b81ddad14d37] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d1d404ac-cc11-4eb6-ae07-b81ddad14d37] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00372317s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-694102 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-694102
helpers_test.go:243: (dbg) docker inspect no-preload-694102:

-- stdout --
	[
	    {
	        "Id": "2919e7e2844d1cf44454f2c989bb893b3894a7f26ed969b74c6a6adfa629bed6",
	        "Created": "2025-11-24T14:05:20.101247347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 228306,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:05:20.202900361Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/2919e7e2844d1cf44454f2c989bb893b3894a7f26ed969b74c6a6adfa629bed6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2919e7e2844d1cf44454f2c989bb893b3894a7f26ed969b74c6a6adfa629bed6/hostname",
	        "HostsPath": "/var/lib/docker/containers/2919e7e2844d1cf44454f2c989bb893b3894a7f26ed969b74c6a6adfa629bed6/hosts",
	        "LogPath": "/var/lib/docker/containers/2919e7e2844d1cf44454f2c989bb893b3894a7f26ed969b74c6a6adfa629bed6/2919e7e2844d1cf44454f2c989bb893b3894a7f26ed969b74c6a6adfa629bed6-json.log",
	        "Name": "/no-preload-694102",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-694102:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-694102",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2919e7e2844d1cf44454f2c989bb893b3894a7f26ed969b74c6a6adfa629bed6",
	                "LowerDir": "/var/lib/docker/overlay2/2043cead9d249d0fcb074b26ff27da6d74e0f435b45dc87810eac93600e787e5-init/diff:/var/lib/docker/overlay2/f206897dad0d7c6b66379aa7c75402ab98ba158a4fc5aedf84eda3d57da10430/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2043cead9d249d0fcb074b26ff27da6d74e0f435b45dc87810eac93600e787e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2043cead9d249d0fcb074b26ff27da6d74e0f435b45dc87810eac93600e787e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2043cead9d249d0fcb074b26ff27da6d74e0f435b45dc87810eac93600e787e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-694102",
	                "Source": "/var/lib/docker/volumes/no-preload-694102/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-694102",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-694102",
	                "name.minikube.sigs.k8s.io": "no-preload-694102",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fc348d81a08d06cdab27b864057fc0e4e77a5b6bf300294a793dbb2cfa2919b4",
	            "SandboxKey": "/var/run/docker/netns/fc348d81a08d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-694102": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:31:6e:4b:52:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "26d62b1d238cec67ff97d85579746f7a43022e393bcf007b8b06c40243c0378a",
	                    "EndpointID": "1f4bf4a4d9d9f8dd47a4ded5f04e54cada8aed3770c32f5ea7b33d19e75717b1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-694102",
	                        "2919e7e2844d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-694102 -n no-preload-694102
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-694102 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-694102 logs -n 25: (1.615478897s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-609438 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:04 UTC │ 24 Nov 25 14:04 UTC │
	│ start   │ -p default-k8s-diff-port-609438 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:04 UTC │ 24 Nov 25 14:04 UTC │
	│ addons  │ enable dashboard -p embed-certs-593634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:04 UTC │ 24 Nov 25 14:04 UTC │
	│ start   │ -p embed-certs-593634 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:04 UTC │ 24 Nov 25 14:05 UTC │
	│ image   │ default-k8s-diff-port-609438 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ pause   │ -p default-k8s-diff-port-609438 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ unpause │ -p default-k8s-diff-port-609438 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ delete  │ -p default-k8s-diff-port-609438                                                                                                                                                                                                                     │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ delete  │ -p default-k8s-diff-port-609438                                                                                                                                                                                                                     │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ delete  │ -p disable-driver-mounts-073831                                                                                                                                                                                                                     │ disable-driver-mounts-073831 │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ start   │ -p no-preload-694102 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-694102            │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:06 UTC │
	│ image   │ embed-certs-593634 image list --format=json                                                                                                                                                                                                         │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ pause   │ -p embed-certs-593634 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ unpause │ -p embed-certs-593634 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ delete  │ -p embed-certs-593634                                                                                                                                                                                                                               │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ delete  │ -p embed-certs-593634                                                                                                                                                                                                                               │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ start   │ -p newest-cni-857121 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:06 UTC │
	│ addons  │ enable metrics-server -p newest-cni-857121 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ stop    │ -p newest-cni-857121 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-857121 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ start   │ -p newest-cni-857121 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ image   │ newest-cni-857121 image list --format=json                                                                                                                                                                                                          │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ pause   │ -p newest-cni-857121 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ unpause │ -p newest-cni-857121 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ delete  │ -p newest-cni-857121                                                                                                                                                                                                                                │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:06:22
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:06:22.748940  235400 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:06:22.749135  235400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:06:22.749148  235400 out.go:374] Setting ErrFile to fd 2...
	I1124 14:06:22.749154  235400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:06:22.750031  235400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 14:06:22.750471  235400 out.go:368] Setting JSON to false
	I1124 14:06:22.751372  235400 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6532,"bootTime":1763986651,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 14:06:22.751443  235400 start.go:143] virtualization:  
	I1124 14:06:22.754496  235400 out.go:179] * [newest-cni-857121] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:06:22.759248  235400 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:06:22.759425  235400 notify.go:221] Checking for updates...
	I1124 14:06:22.765196  235400 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:06:22.768134  235400 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:06:22.771058  235400 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	I1124 14:06:22.773883  235400 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:06:22.776878  235400 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:06:22.780455  235400 config.go:182] Loaded profile config "newest-cni-857121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:06:22.781004  235400 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:06:22.810561  235400 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 14:06:22.810674  235400 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:06:22.888531  235400 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:06:22.87883056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:06:22.888634  235400 docker.go:319] overlay module found
	I1124 14:06:22.893623  235400 out.go:179] * Using the docker driver based on existing profile
	I1124 14:06:22.896558  235400 start.go:309] selected driver: docker
	I1124 14:06:22.896580  235400 start.go:927] validating driver "docker" against &{Name:newest-cni-857121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-857121 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:06:22.896699  235400 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:06:22.897453  235400 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:06:22.973311  235400 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 14:06:22.950072014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 14:06:22.973651  235400 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 14:06:22.973685  235400 cni.go:84] Creating CNI manager for ""
	I1124 14:06:22.973742  235400 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:06:22.973784  235400 start.go:353] cluster config:
	{Name:newest-cni-857121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-857121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:06:22.978722  235400 out.go:179] * Starting "newest-cni-857121" primary control-plane node in "newest-cni-857121" cluster
	I1124 14:06:22.981684  235400 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 14:06:22.984655  235400 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:06:22.987570  235400 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:06:22.987619  235400 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1124 14:06:22.987630  235400 cache.go:65] Caching tarball of preloaded images
	I1124 14:06:22.987671  235400 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:06:22.987716  235400 preload.go:238] Found /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1124 14:06:22.987727  235400 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 14:06:22.987870  235400 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/newest-cni-857121/config.json ...
	I1124 14:06:23.010324  235400 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:06:23.010351  235400 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:06:23.010372  235400 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:06:23.010403  235400 start.go:360] acquireMachinesLock for newest-cni-857121: {Name:mk942cc5f918a29dddee4d00e3503a8d5aa6334a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:06:23.010476  235400 start.go:364] duration metric: took 49.362µs to acquireMachinesLock for "newest-cni-857121"
	I1124 14:06:23.010513  235400 start.go:96] Skipping create...Using existing machine configuration
	I1124 14:06:23.010521  235400 fix.go:54] fixHost starting: 
	I1124 14:06:23.010804  235400 cli_runner.go:164] Run: docker container inspect newest-cni-857121 --format={{.State.Status}}
	I1124 14:06:23.027800  235400 fix.go:112] recreateIfNeeded on newest-cni-857121: state=Stopped err=<nil>
	W1124 14:06:23.027830  235400 fix.go:138] unexpected machine state, will restart: <nil>
	W1124 14:06:20.658654  227999 node_ready.go:57] node "no-preload-694102" has "Ready":"False" status (will retry)
	W1124 14:06:23.157350  227999 node_ready.go:57] node "no-preload-694102" has "Ready":"False" status (will retry)
	I1124 14:06:23.031104  235400 out.go:252] * Restarting existing docker container for "newest-cni-857121" ...
	I1124 14:06:23.031193  235400 cli_runner.go:164] Run: docker start newest-cni-857121
	I1124 14:06:23.294392  235400 cli_runner.go:164] Run: docker container inspect newest-cni-857121 --format={{.State.Status}}
	I1124 14:06:23.318345  235400 kic.go:430] container "newest-cni-857121" state is running.
	I1124 14:06:23.318736  235400 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-857121
	I1124 14:06:23.342150  235400 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/newest-cni-857121/config.json ...
	I1124 14:06:23.342379  235400 machine.go:94] provisionDockerMachine start ...
	I1124 14:06:23.342460  235400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-857121
	I1124 14:06:23.372930  235400 main.go:143] libmachine: Using SSH client type: native
	I1124 14:06:23.373532  235400 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1124 14:06:23.373552  235400 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:06:23.374339  235400 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 14:06:26.535622  235400 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-857121
	
	I1124 14:06:26.535648  235400 ubuntu.go:182] provisioning hostname "newest-cni-857121"
	I1124 14:06:26.535710  235400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-857121
	I1124 14:06:26.554314  235400 main.go:143] libmachine: Using SSH client type: native
	I1124 14:06:26.554632  235400 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1124 14:06:26.554648  235400 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-857121 && echo "newest-cni-857121" | sudo tee /etc/hostname
	I1124 14:06:26.718365  235400 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-857121
	
	I1124 14:06:26.718439  235400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-857121
	I1124 14:06:26.736413  235400 main.go:143] libmachine: Using SSH client type: native
	I1124 14:06:26.736743  235400 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1124 14:06:26.736784  235400 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-857121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-857121/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-857121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:06:26.892228  235400 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:06:26.892254  235400 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-2368/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-2368/.minikube}
	I1124 14:06:26.892274  235400 ubuntu.go:190] setting up certificates
	I1124 14:06:26.892326  235400 provision.go:84] configureAuth start
	I1124 14:06:26.892411  235400 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-857121
	I1124 14:06:26.910077  235400 provision.go:143] copyHostCerts
	I1124 14:06:26.910160  235400 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem, removing ...
	I1124 14:06:26.910180  235400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem
	I1124 14:06:26.910259  235400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/ca.pem (1082 bytes)
	I1124 14:06:26.910372  235400 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem, removing ...
	I1124 14:06:26.910382  235400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem
	I1124 14:06:26.910409  235400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/cert.pem (1123 bytes)
	I1124 14:06:26.910466  235400 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem, removing ...
	I1124 14:06:26.910473  235400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem
	I1124 14:06:26.910496  235400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-2368/.minikube/key.pem (1679 bytes)
	I1124 14:06:26.910551  235400 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem org=jenkins.newest-cni-857121 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-857121]
	I1124 14:06:27.185900  235400 provision.go:177] copyRemoteCerts
	I1124 14:06:27.185967  235400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:06:27.186011  235400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-857121
	I1124 14:06:27.206847  235400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/newest-cni-857121/id_rsa Username:docker}
	I1124 14:06:27.316245  235400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 14:06:27.335009  235400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 14:06:27.354774  235400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:06:27.376717  235400 provision.go:87] duration metric: took 484.353213ms to configureAuth
	I1124 14:06:27.376746  235400 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:06:27.377001  235400 config.go:182] Loaded profile config "newest-cni-857121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:06:27.377014  235400 machine.go:97] duration metric: took 4.034620419s to provisionDockerMachine
	I1124 14:06:27.377022  235400 start.go:293] postStartSetup for "newest-cni-857121" (driver="docker")
	I1124 14:06:27.377033  235400 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:06:27.377086  235400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:06:27.377136  235400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-857121
	I1124 14:06:27.394885  235400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/newest-cni-857121/id_rsa Username:docker}
	I1124 14:06:27.500210  235400 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:06:27.503889  235400 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:06:27.503963  235400 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:06:27.503977  235400 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/addons for local assets ...
	I1124 14:06:27.504039  235400 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-2368/.minikube/files for local assets ...
	I1124 14:06:27.504121  235400 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem -> 41782.pem in /etc/ssl/certs
	I1124 14:06:27.504243  235400 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:06:27.512542  235400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:06:27.532797  235400 start.go:296] duration metric: took 155.758645ms for postStartSetup
	I1124 14:06:27.532881  235400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:06:27.532922  235400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-857121
	I1124 14:06:27.552032  235400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/newest-cni-857121/id_rsa Username:docker}
	I1124 14:06:27.658887  235400 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:06:27.663870  235400 fix.go:56] duration metric: took 4.653336309s for fixHost
	I1124 14:06:27.663898  235400 start.go:83] releasing machines lock for "newest-cni-857121", held for 4.653402132s
	I1124 14:06:27.663996  235400 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-857121
	I1124 14:06:27.682464  235400 ssh_runner.go:195] Run: cat /version.json
	I1124 14:06:27.682521  235400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-857121
	I1124 14:06:27.682551  235400 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:06:27.682626  235400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-857121
	I1124 14:06:27.700355  235400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/newest-cni-857121/id_rsa Username:docker}
	I1124 14:06:27.720099  235400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/newest-cni-857121/id_rsa Username:docker}
	W1124 14:06:25.157594  227999 node_ready.go:57] node "no-preload-694102" has "Ready":"False" status (will retry)
	W1124 14:06:27.158384  227999 node_ready.go:57] node "no-preload-694102" has "Ready":"False" status (will retry)
	I1124 14:06:28.672301  227999 node_ready.go:49] node "no-preload-694102" is "Ready"
	I1124 14:06:28.672328  227999 node_ready.go:38] duration metric: took 12.517945215s for node "no-preload-694102" to be "Ready" ...
	I1124 14:06:28.672343  227999 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:06:28.672403  227999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:06:28.712226  227999 api_server.go:72] duration metric: took 14.67870379s to wait for apiserver process to appear ...
	I1124 14:06:28.712252  227999 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:06:28.712272  227999 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 14:06:28.740980  227999 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 14:06:28.742073  227999 api_server.go:141] control plane version: v1.34.1
	I1124 14:06:28.742097  227999 api_server.go:131] duration metric: took 29.83842ms to wait for apiserver health ...
	I1124 14:06:28.742106  227999 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:06:28.748408  227999 system_pods.go:59] 8 kube-system pods found
	I1124 14:06:28.748496  227999 system_pods.go:61] "coredns-66bc5c9577-mlv2v" [67b42ac3-2efb-4c75-bf32-69997934054f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:06:28.748520  227999 system_pods.go:61] "etcd-no-preload-694102" [ad1ec9e9-8ac6-4935-846d-61da57247663] Running
	I1124 14:06:28.748557  227999 system_pods.go:61] "kindnet-7c59v" [8a5b0f0c-0045-46ca-b1fb-3b810431b3c1] Running
	I1124 14:06:28.748582  227999 system_pods.go:61] "kube-apiserver-no-preload-694102" [bf827f89-f5fe-412c-9dde-002f07d7813f] Running
	I1124 14:06:28.748603  227999 system_pods.go:61] "kube-controller-manager-no-preload-694102" [23d3e702-29bf-47e6-a3a6-02bac428205b] Running
	I1124 14:06:28.748624  227999 system_pods.go:61] "kube-proxy-zfqkk" [25554c4e-42ac-44c3-b789-ec859d575750] Running
	I1124 14:06:28.748657  227999 system_pods.go:61] "kube-scheduler-no-preload-694102" [ca86e2f7-72b9-4ebf-a6b1-9d6d1fcdeb26] Running
	I1124 14:06:28.748679  227999 system_pods.go:61] "storage-provisioner" [c7650495-d936-47cd-8950-19783ba64e6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:06:28.748704  227999 system_pods.go:74] duration metric: took 6.590478ms to wait for pod list to return data ...
	I1124 14:06:28.748746  227999 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:06:28.769369  227999 default_sa.go:45] found service account: "default"
	I1124 14:06:28.769446  227999 default_sa.go:55] duration metric: took 20.68081ms for default service account to be created ...
	I1124 14:06:28.769492  227999 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:06:28.782492  227999 system_pods.go:86] 8 kube-system pods found
	I1124 14:06:28.782579  227999 system_pods.go:89] "coredns-66bc5c9577-mlv2v" [67b42ac3-2efb-4c75-bf32-69997934054f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:06:28.782609  227999 system_pods.go:89] "etcd-no-preload-694102" [ad1ec9e9-8ac6-4935-846d-61da57247663] Running
	I1124 14:06:28.782649  227999 system_pods.go:89] "kindnet-7c59v" [8a5b0f0c-0045-46ca-b1fb-3b810431b3c1] Running
	I1124 14:06:28.782673  227999 system_pods.go:89] "kube-apiserver-no-preload-694102" [bf827f89-f5fe-412c-9dde-002f07d7813f] Running
	I1124 14:06:28.782696  227999 system_pods.go:89] "kube-controller-manager-no-preload-694102" [23d3e702-29bf-47e6-a3a6-02bac428205b] Running
	I1124 14:06:28.782733  227999 system_pods.go:89] "kube-proxy-zfqkk" [25554c4e-42ac-44c3-b789-ec859d575750] Running
	I1124 14:06:28.782758  227999 system_pods.go:89] "kube-scheduler-no-preload-694102" [ca86e2f7-72b9-4ebf-a6b1-9d6d1fcdeb26] Running
	I1124 14:06:28.782782  227999 system_pods.go:89] "storage-provisioner" [c7650495-d936-47cd-8950-19783ba64e6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:06:28.782830  227999 retry.go:31] will retry after 224.647182ms: missing components: kube-dns
	I1124 14:06:27.811960  235400 ssh_runner.go:195] Run: systemctl --version
	I1124 14:06:28.174234  235400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:06:28.179381  235400 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:06:28.179450  235400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:06:28.189038  235400 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 14:06:28.189063  235400 start.go:496] detecting cgroup driver to use...
	I1124 14:06:28.189113  235400 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 14:06:28.189194  235400 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 14:06:28.208330  235400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 14:06:28.222231  235400 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:06:28.222315  235400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:06:28.238915  235400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:06:28.252479  235400 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:06:28.379881  235400 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:06:28.491812  235400 docker.go:234] disabling docker service ...
	I1124 14:06:28.491936  235400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:06:28.507893  235400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:06:28.522532  235400 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:06:28.644361  235400 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:06:28.820441  235400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:06:28.834877  235400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:06:28.849856  235400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 14:06:28.870445  235400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 14:06:28.892395  235400 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 14:06:28.892508  235400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 14:06:28.902284  235400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:06:28.911805  235400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 14:06:28.921770  235400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 14:06:28.931877  235400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:06:28.947257  235400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 14:06:28.962546  235400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 14:06:28.974455  235400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 14:06:28.987119  235400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:06:28.997065  235400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:06:29.011626  235400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:06:29.187221  235400 ssh_runner.go:195] Run: sudo systemctl restart containerd
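
The block of sed invocations above rewrites /etc/containerd/config.toml in place before containerd is restarted: it pins the pause image, forces SystemdCgroup = false so containerd uses the cgroupfs driver detected on the host, switches io.containerd.runtime.v1.linux / runc.v1 references to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and re-adds enable_unprivileged_ports = true. A minimal Go sketch of the same idempotent substitution technique (the file path and the "false" value come from the log; minikube itself runs plain sed over SSH, so this helper is illustrative only):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		path := "/etc/containerd/config.toml" // path taken from the log above
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Same substitution the log performs with sed: keep the leading
		// indentation, force SystemdCgroup = false (cgroupfs driver).
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0644); err != nil {
			panic(err)
		}
		fmt.Println("rewrote", path)
	}
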
	I1124 14:06:29.415089  235400 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 14:06:29.415160  235400 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 14:06:29.419653  235400 start.go:564] Will wait 60s for crictl version
	I1124 14:06:29.419735  235400 ssh_runner.go:195] Run: which crictl
	I1124 14:06:29.423855  235400 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:06:29.468000  235400 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 14:06:29.468072  235400 ssh_runner.go:195] Run: containerd --version
	I1124 14:06:29.505604  235400 ssh_runner.go:195] Run: containerd --version
	I1124 14:06:29.537267  235400 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 14:06:29.540365  235400 cli_runner.go:164] Run: docker network inspect newest-cni-857121 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:06:29.557072  235400 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 14:06:29.561801  235400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
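
The one-liner above keeps /etc/hosts idempotent: it drops any existing line mapping host.minikube.internal, appends the fresh 192.168.76.1 entry, writes the result to a temp file, and copies it back with sudo. A hedged Go sketch of the same filter-then-append pattern (the path, IP and hostname are from the log; the helper itself is illustrative, not minikube's code):

	package main

	import (
		"os"
		"strings"
	)

	// upsertHost rewrites an /etc/hosts-style file so that exactly one line
	// maps the given name, mirroring the grep -v / echo / cp sequence above.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop any stale mapping for this name
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Values taken from the log above; running this for real needs root.
		if err := upsertHost("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}
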
	I1124 14:06:29.575481  235400 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 14:06:29.578337  235400 kubeadm.go:884] updating cluster {Name:newest-cni-857121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-857121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:06:29.578521  235400 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 14:06:29.578615  235400 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:06:29.606209  235400 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:06:29.606234  235400 containerd.go:534] Images already preloaded, skipping extraction
	I1124 14:06:29.606290  235400 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:06:29.631837  235400 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 14:06:29.631857  235400 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:06:29.631885  235400 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1124 14:06:29.632037  235400 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-857121 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-857121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:06:29.632113  235400 ssh_runner.go:195] Run: sudo crictl info
	I1124 14:06:29.659140  235400 cni.go:84] Creating CNI manager for ""
	I1124 14:06:29.659168  235400 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 14:06:29.659186  235400 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 14:06:29.659238  235400 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-857121 NodeName:newest-cni-857121 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:06:29.659398  235400 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-857121"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:06:29.659520  235400 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:06:29.667813  235400 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:06:29.667903  235400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:06:29.675668  235400 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1124 14:06:29.689719  235400 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:06:29.702651  235400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1124 14:06:29.715812  235400 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:06:29.720249  235400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:06:29.730658  235400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:06:29.843005  235400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:06:29.864712  235400 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/newest-cni-857121 for IP: 192.168.76.2
	I1124 14:06:29.864777  235400 certs.go:195] generating shared ca certs ...
	I1124 14:06:29.864807  235400 certs.go:227] acquiring lock for ca certs: {Name:mkcd8707c782acde0e57168c044a3df942dc4ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:06:29.864990  235400 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key
	I1124 14:06:29.865074  235400 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key
	I1124 14:06:29.865109  235400 certs.go:257] generating profile certs ...
	I1124 14:06:29.865241  235400 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/newest-cni-857121/client.key
	I1124 14:06:29.865354  235400 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/newest-cni-857121/apiserver.key.3d18bc4e
	I1124 14:06:29.865430  235400 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/newest-cni-857121/proxy-client.key
	I1124 14:06:29.865574  235400 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem (1338 bytes)
	W1124 14:06:29.865635  235400 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178_empty.pem, impossibly tiny 0 bytes
	I1124 14:06:29.865660  235400 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca-key.pem (1679 bytes)
	I1124 14:06:29.865716  235400 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/ca.pem (1082 bytes)
	I1124 14:06:29.865769  235400 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:06:29.865838  235400 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/certs/key.pem (1679 bytes)
	I1124 14:06:29.865911  235400 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem (1708 bytes)
	I1124 14:06:29.866524  235400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:06:29.903487  235400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 14:06:29.921236  235400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:06:29.943096  235400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:06:29.963182  235400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/newest-cni-857121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 14:06:29.981910  235400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/newest-cni-857121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 14:06:30.003450  235400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/newest-cni-857121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:06:30.055058  235400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/newest-cni-857121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:06:30.093655  235400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:06:30.120495  235400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/certs/4178.pem --> /usr/share/ca-certificates/4178.pem (1338 bytes)
	I1124 14:06:30.151546  235400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/ssl/certs/41782.pem --> /usr/share/ca-certificates/41782.pem (1708 bytes)
	I1124 14:06:30.176108  235400 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:06:30.192501  235400 ssh_runner.go:195] Run: openssl version
	I1124 14:06:30.200265  235400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:06:30.211189  235400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:06:30.216803  235400 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:06:30.216918  235400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:06:30.259053  235400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:06:30.269361  235400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4178.pem && ln -fs /usr/share/ca-certificates/4178.pem /etc/ssl/certs/4178.pem"
	I1124 14:06:30.278435  235400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4178.pem
	I1124 14:06:30.282721  235400 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:21 /usr/share/ca-certificates/4178.pem
	I1124 14:06:30.282784  235400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4178.pem
	I1124 14:06:30.327823  235400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4178.pem /etc/ssl/certs/51391683.0"
	I1124 14:06:30.340058  235400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41782.pem && ln -fs /usr/share/ca-certificates/41782.pem /etc/ssl/certs/41782.pem"
	I1124 14:06:30.356069  235400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41782.pem
	I1124 14:06:30.365019  235400 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:21 /usr/share/ca-certificates/41782.pem
	I1124 14:06:30.365135  235400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41782.pem
	I1124 14:06:30.418478  235400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41782.pem /etc/ssl/certs/3ec20f2e.0"
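
The ls / openssl x509 -hash / ln -fs sequence above maintains the system CA lookup directory: each PEM in /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its OpenSSL subject hash (for example 3ec20f2e.0), which is the layout OpenSSL consults at verification time. A small Go sketch that reproduces the step by shelling out to the same openssl invocation shown in the log (the certificate path is from the log; the helper is illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash creates /etc/ssl/certs/<subject-hash>.0 -> pemPath,
	// the same layout the log builds with `openssl x509 -hash -noout` + `ln -fs`.
	func linkBySubjectHash(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mimic ln -fs: replace any existing link
		return link, os.Symlink(pemPath, link)
	}

	func main() {
		link, err := linkBySubjectHash("/usr/share/ca-certificates/41782.pem")
		if err != nil {
			panic(err)
		}
		fmt.Println("created", link)
	}
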
	I1124 14:06:30.427874  235400 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:06:30.432928  235400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 14:06:30.484018  235400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 14:06:30.528958  235400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 14:06:30.589492  235400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 14:06:30.661004  235400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 14:06:30.739216  235400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
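
Each `openssl x509 -noout -in ... -checkend 86400` call above asks one question: does the certificate expire within the next 24 hours? A non-zero exit would trigger regeneration. The same check expressed in Go with crypto/x509, as a sketch (the 86400-second window comes from the log; the file path is just one of the certs checked above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires inside
	// the given window - the same question `openssl x509 -checkend` answers.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
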
	I1124 14:06:30.788219  235400 kubeadm.go:401] StartCluster: {Name:newest-cni-857121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-857121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:06:30.788363  235400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 14:06:30.788454  235400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:06:30.836371  235400 cri.go:89] found id: "eb871d89cd5e948c4d84040f2d4aa32641805c95634013b9462001fe0382e0c9"
	I1124 14:06:30.836441  235400 cri.go:89] found id: "b6a304e9fb09cd1db4fbc11b5dd31f3acce5a1a3022eb074b42a315ca1e82086"
	I1124 14:06:30.836459  235400 cri.go:89] found id: "59e91bd28f90fe20ae4b82a4cd08659f853e4388bea2a999db98ab765846ddc3"
	I1124 14:06:30.836479  235400 cri.go:89] found id: "b5cee0c845b1dde6923fad318bea36673786272efac45da0f0e2408839f8e616"
	I1124 14:06:30.836512  235400 cri.go:89] found id: "ae949c667bf9fd7639deaa62564db2f337aad1991006aab3f0723beda73ec2a8"
	I1124 14:06:30.836534  235400 cri.go:89] found id: "f744aa236155f07c53bb9291c0262658983a9a1383882435000f85ffb50add93"
	I1124 14:06:30.836552  235400 cri.go:89] found id: "8fcfe99b64c08902243255e1ee54151f68e1041e65e4f64280d86522f3a6ce65"
	I1124 14:06:30.836577  235400 cri.go:89] found id: ""
	I1124 14:06:30.836658  235400 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1124 14:06:30.902403  235400 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"3f57d92d24249ee314beb96359ff1770e8d76d410363503c99375f9096e27927","pid":872,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f57d92d24249ee314beb96359ff1770e8d76d410363503c99375f9096e27927","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f57d92d24249ee314beb96359ff1770e8d76d410363503c99375f9096e27927/rootfs","created":"2025-11-24T14:06:30.62320064Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"3f57d92d24249ee314beb96359ff1770e8d76d410363503c99375f9096e27927","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-857121_c3123a5a08e2d290f5820d89ac99270c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-857121","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c3123a5a08e2d290f5820d89ac99270c"},"owner":"root"},{"ociVersion":"1.2.1","id":"a7cd511b9f2912a49ca6399ffeb9958189a43c91bca223a42e10a9e64aad5e9f","pid":949,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a7cd511b9f2912a49ca6399ffeb9958189a43c91bca223a42e10a9e64aad5e9f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a7cd511b9f2912a49ca6399ffeb9958189a43c91bca223a42e10a9e64aad5e9f/rootfs","created":"2025-11-24T14:06:30.761984655Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"a7cd511b9f2912a49ca6399ffeb9958189a43c91bca223a42e10a9e64aad5e9f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-857121_b9114fd5f572f1b4f3ce4742faffb1a9","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-857121","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b9114fd5f572f1b4f3ce4742faffb1a9"},"owner":"root"},{"ociVersion":"1.2.1","id":"dd9a85d02b11fefa8276826800c4a0b5425c8af1963f4fb8fbc5bc0b711d07bc","pid":940,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd9a85d02b11fefa8276826800c4a0b5425c8af1963f4fb8fbc5bc0b711d07bc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd9a85d02b11fefa8276826800c4a0b5425c8af1963f4fb8fbc5bc0b711d07bc/rootfs","created":"2025-11-24T14:06:30.711615097Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"dd9a85d02b11fefa8276826800c4a0b5425c8af1963f4fb8fbc5bc0b711d07bc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-857121_e0fa7adda6888197fd2112da2d2a0462","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-857121","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e0fa7adda6888197fd2112da2d2a0462"},"owner":"root"},{"ociVersion":"1.2.1","id":"eb871d89cd5e948c4d84040f2d4aa32641805c95634013b9462001fe0382e0c9","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb871d89cd5e948c4d84040f2d4aa32641805c95634013b9462001fe0382e0c9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb871d89cd5e948c4d84040f2d4aa32641805c95634013b9462001fe0382e0c9/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"3f57d92d24249ee314beb96359ff1770e8d76d410363503c99375f9096e27927","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-857121","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c3123a5a08e2d290f5820d89ac99270c"},"owner":"root"},{"ociVersion":"1.2.1","id":"fdce93a66142b8a767299aceb924daa2731afc5563d8db2a18b0ef64972dce95","pid":917,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdce93a66142b8a767299aceb924daa2731afc5563d8db2a18b0ef64972dce95","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdce93a66142b8a767299aceb924daa2731afc5563d8db2a18b0ef64972dce95/rootfs","created":"2025-11-24T14:06:30.659177742Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"fdce93a66142b8a767299aceb924daa2731afc5563d8db2a18b0ef64972dce95","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-857121_f724dc13bf9ce1da4fc8fc56c46d34f3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-857121","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f724dc13bf9ce1da4fc8fc56c46d34f3"},"owner":"root"}]
	I1124 14:06:30.902617  235400 cri.go:126] list returned 5 containers
	I1124 14:06:30.902646  235400 cri.go:129] container: {ID:3f57d92d24249ee314beb96359ff1770e8d76d410363503c99375f9096e27927 Status:running}
	I1124 14:06:30.902685  235400 cri.go:131] skipping 3f57d92d24249ee314beb96359ff1770e8d76d410363503c99375f9096e27927 - not in ps
	I1124 14:06:30.902707  235400 cri.go:129] container: {ID:a7cd511b9f2912a49ca6399ffeb9958189a43c91bca223a42e10a9e64aad5e9f Status:running}
	I1124 14:06:30.902742  235400 cri.go:131] skipping a7cd511b9f2912a49ca6399ffeb9958189a43c91bca223a42e10a9e64aad5e9f - not in ps
	I1124 14:06:30.902762  235400 cri.go:129] container: {ID:dd9a85d02b11fefa8276826800c4a0b5425c8af1963f4fb8fbc5bc0b711d07bc Status:running}
	I1124 14:06:30.902784  235400 cri.go:131] skipping dd9a85d02b11fefa8276826800c4a0b5425c8af1963f4fb8fbc5bc0b711d07bc - not in ps
	I1124 14:06:30.902816  235400 cri.go:129] container: {ID:eb871d89cd5e948c4d84040f2d4aa32641805c95634013b9462001fe0382e0c9 Status:stopped}
	I1124 14:06:30.902841  235400 cri.go:135] skipping {eb871d89cd5e948c4d84040f2d4aa32641805c95634013b9462001fe0382e0c9 stopped}: state = "stopped", want "paused"
	I1124 14:06:30.902863  235400 cri.go:129] container: {ID:fdce93a66142b8a767299aceb924daa2731afc5563d8db2a18b0ef64972dce95 Status:running}
	I1124 14:06:30.902885  235400 cri.go:131] skipping fdce93a66142b8a767299aceb924daa2731afc5563d8db2a18b0ef64972dce95 - not in ps
	I1124 14:06:30.902962  235400 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:06:30.912866  235400 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 14:06:30.912928  235400 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 14:06:30.913003  235400 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 14:06:30.923008  235400 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 14:06:30.923615  235400 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-857121" does not appear in /home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:06:30.923959  235400 kubeconfig.go:62] /home/jenkins/minikube-integration/21932-2368/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-857121" cluster setting kubeconfig missing "newest-cni-857121" context setting]
	I1124 14:06:30.924760  235400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/kubeconfig: {Name:mk246d21eaffbd8aca2abdc1b2f89d6fcc902f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:06:30.926852  235400 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 14:06:30.943302  235400 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 14:06:30.943384  235400 kubeadm.go:602] duration metric: took 30.437187ms to restartPrimaryControlPlane
	I1124 14:06:30.943409  235400 kubeadm.go:403] duration metric: took 155.199835ms to StartCluster
	I1124 14:06:30.943450  235400 settings.go:142] acquiring lock: {Name:mk2b0bbff4d8ced468f457362668d43b813dc062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:06:30.943551  235400 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:06:30.944685  235400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-2368/kubeconfig: {Name:mk246d21eaffbd8aca2abdc1b2f89d6fcc902f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:06:30.944969  235400 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 14:06:30.945460  235400 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:06:30.945567  235400 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-857121"
	I1124 14:06:30.945589  235400 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-857121"
	W1124 14:06:30.945599  235400 addons.go:248] addon storage-provisioner should already be in state true
	I1124 14:06:30.945613  235400 addons.go:70] Setting dashboard=true in profile "newest-cni-857121"
	I1124 14:06:30.945641  235400 addons.go:239] Setting addon dashboard=true in "newest-cni-857121"
	I1124 14:06:30.945664  235400 addons.go:70] Setting metrics-server=true in profile "newest-cni-857121"
	I1124 14:06:30.945679  235400 addons.go:239] Setting addon metrics-server=true in "newest-cni-857121"
	W1124 14:06:30.945685  235400 addons.go:248] addon metrics-server should already be in state true
	I1124 14:06:30.945714  235400 host.go:66] Checking if "newest-cni-857121" exists ...
	W1124 14:06:30.945739  235400 addons.go:248] addon dashboard should already be in state true
	I1124 14:06:30.945783  235400 host.go:66] Checking if "newest-cni-857121" exists ...
	I1124 14:06:30.946219  235400 cli_runner.go:164] Run: docker container inspect newest-cni-857121 --format={{.State.Status}}
	I1124 14:06:30.946384  235400 cli_runner.go:164] Run: docker container inspect newest-cni-857121 --format={{.State.Status}}
	I1124 14:06:30.945647  235400 addons.go:70] Setting default-storageclass=true in profile "newest-cni-857121"
	I1124 14:06:30.950782  235400 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-857121"
	I1124 14:06:30.945627  235400 host.go:66] Checking if "newest-cni-857121" exists ...
	I1124 14:06:30.945525  235400 config.go:182] Loaded profile config "newest-cni-857121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 14:06:30.953433  235400 cli_runner.go:164] Run: docker container inspect newest-cni-857121 --format={{.State.Status}}
	I1124 14:06:30.964949  235400 cli_runner.go:164] Run: docker container inspect newest-cni-857121 --format={{.State.Status}}
	I1124 14:06:30.970541  235400 out.go:179] * Verifying Kubernetes components...
	I1124 14:06:30.977958  235400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:06:30.984052  235400 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 14:06:30.994615  235400 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 14:06:30.998506  235400 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1124 14:06:31.000070  235400 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 14:06:31.000099  235400 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 14:06:31.000210  235400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-857121
	I1124 14:06:31.006429  235400 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 14:06:31.006466  235400 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 14:06:31.006550  235400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-857121
	I1124 14:06:31.029382  235400 addons.go:239] Setting addon default-storageclass=true in "newest-cni-857121"
	W1124 14:06:31.029405  235400 addons.go:248] addon default-storageclass should already be in state true
	I1124 14:06:31.029433  235400 host.go:66] Checking if "newest-cni-857121" exists ...
	I1124 14:06:31.029855  235400 cli_runner.go:164] Run: docker container inspect newest-cni-857121 --format={{.State.Status}}
	I1124 14:06:31.052917  235400 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:06:29.018447  227999 system_pods.go:86] 8 kube-system pods found
	I1124 14:06:29.018533  227999 system_pods.go:89] "coredns-66bc5c9577-mlv2v" [67b42ac3-2efb-4c75-bf32-69997934054f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:06:29.018601  227999 system_pods.go:89] "etcd-no-preload-694102" [ad1ec9e9-8ac6-4935-846d-61da57247663] Running
	I1124 14:06:29.018627  227999 system_pods.go:89] "kindnet-7c59v" [8a5b0f0c-0045-46ca-b1fb-3b810431b3c1] Running
	I1124 14:06:29.018651  227999 system_pods.go:89] "kube-apiserver-no-preload-694102" [bf827f89-f5fe-412c-9dde-002f07d7813f] Running
	I1124 14:06:29.018689  227999 system_pods.go:89] "kube-controller-manager-no-preload-694102" [23d3e702-29bf-47e6-a3a6-02bac428205b] Running
	I1124 14:06:29.018715  227999 system_pods.go:89] "kube-proxy-zfqkk" [25554c4e-42ac-44c3-b789-ec859d575750] Running
	I1124 14:06:29.018736  227999 system_pods.go:89] "kube-scheduler-no-preload-694102" [ca86e2f7-72b9-4ebf-a6b1-9d6d1fcdeb26] Running
	I1124 14:06:29.018776  227999 system_pods.go:89] "storage-provisioner" [c7650495-d936-47cd-8950-19783ba64e6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:06:29.018820  227999 retry.go:31] will retry after 327.766684ms: missing components: kube-dns
	I1124 14:06:29.354946  227999 system_pods.go:86] 8 kube-system pods found
	I1124 14:06:29.355029  227999 system_pods.go:89] "coredns-66bc5c9577-mlv2v" [67b42ac3-2efb-4c75-bf32-69997934054f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:06:29.355052  227999 system_pods.go:89] "etcd-no-preload-694102" [ad1ec9e9-8ac6-4935-846d-61da57247663] Running
	I1124 14:06:29.355094  227999 system_pods.go:89] "kindnet-7c59v" [8a5b0f0c-0045-46ca-b1fb-3b810431b3c1] Running
	I1124 14:06:29.355118  227999 system_pods.go:89] "kube-apiserver-no-preload-694102" [bf827f89-f5fe-412c-9dde-002f07d7813f] Running
	I1124 14:06:29.355141  227999 system_pods.go:89] "kube-controller-manager-no-preload-694102" [23d3e702-29bf-47e6-a3a6-02bac428205b] Running
	I1124 14:06:29.355179  227999 system_pods.go:89] "kube-proxy-zfqkk" [25554c4e-42ac-44c3-b789-ec859d575750] Running
	I1124 14:06:29.355203  227999 system_pods.go:89] "kube-scheduler-no-preload-694102" [ca86e2f7-72b9-4ebf-a6b1-9d6d1fcdeb26] Running
	I1124 14:06:29.355223  227999 system_pods.go:89] "storage-provisioner" [c7650495-d936-47cd-8950-19783ba64e6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:06:29.355272  227999 retry.go:31] will retry after 421.54231ms: missing components: kube-dns
	I1124 14:06:29.782830  227999 system_pods.go:86] 8 kube-system pods found
	I1124 14:06:29.782864  227999 system_pods.go:89] "coredns-66bc5c9577-mlv2v" [67b42ac3-2efb-4c75-bf32-69997934054f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:06:29.782871  227999 system_pods.go:89] "etcd-no-preload-694102" [ad1ec9e9-8ac6-4935-846d-61da57247663] Running
	I1124 14:06:29.782877  227999 system_pods.go:89] "kindnet-7c59v" [8a5b0f0c-0045-46ca-b1fb-3b810431b3c1] Running
	I1124 14:06:29.782882  227999 system_pods.go:89] "kube-apiserver-no-preload-694102" [bf827f89-f5fe-412c-9dde-002f07d7813f] Running
	I1124 14:06:29.782886  227999 system_pods.go:89] "kube-controller-manager-no-preload-694102" [23d3e702-29bf-47e6-a3a6-02bac428205b] Running
	I1124 14:06:29.782913  227999 system_pods.go:89] "kube-proxy-zfqkk" [25554c4e-42ac-44c3-b789-ec859d575750] Running
	I1124 14:06:29.782918  227999 system_pods.go:89] "kube-scheduler-no-preload-694102" [ca86e2f7-72b9-4ebf-a6b1-9d6d1fcdeb26] Running
	I1124 14:06:29.782924  227999 system_pods.go:89] "storage-provisioner" [c7650495-d936-47cd-8950-19783ba64e6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:06:29.782939  227999 retry.go:31] will retry after 500.534424ms: missing components: kube-dns
	I1124 14:06:30.288639  227999 system_pods.go:86] 8 kube-system pods found
	I1124 14:06:30.288671  227999 system_pods.go:89] "coredns-66bc5c9577-mlv2v" [67b42ac3-2efb-4c75-bf32-69997934054f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:06:30.288678  227999 system_pods.go:89] "etcd-no-preload-694102" [ad1ec9e9-8ac6-4935-846d-61da57247663] Running
	I1124 14:06:30.288688  227999 system_pods.go:89] "kindnet-7c59v" [8a5b0f0c-0045-46ca-b1fb-3b810431b3c1] Running
	I1124 14:06:30.288693  227999 system_pods.go:89] "kube-apiserver-no-preload-694102" [bf827f89-f5fe-412c-9dde-002f07d7813f] Running
	I1124 14:06:30.288699  227999 system_pods.go:89] "kube-controller-manager-no-preload-694102" [23d3e702-29bf-47e6-a3a6-02bac428205b] Running
	I1124 14:06:30.288702  227999 system_pods.go:89] "kube-proxy-zfqkk" [25554c4e-42ac-44c3-b789-ec859d575750] Running
	I1124 14:06:30.288706  227999 system_pods.go:89] "kube-scheduler-no-preload-694102" [ca86e2f7-72b9-4ebf-a6b1-9d6d1fcdeb26] Running
	I1124 14:06:30.288712  227999 system_pods.go:89] "storage-provisioner" [c7650495-d936-47cd-8950-19783ba64e6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:06:30.288727  227999 retry.go:31] will retry after 539.708185ms: missing components: kube-dns
	I1124 14:06:30.833453  227999 system_pods.go:86] 8 kube-system pods found
	I1124 14:06:30.833479  227999 system_pods.go:89] "coredns-66bc5c9577-mlv2v" [67b42ac3-2efb-4c75-bf32-69997934054f] Running
	I1124 14:06:30.833485  227999 system_pods.go:89] "etcd-no-preload-694102" [ad1ec9e9-8ac6-4935-846d-61da57247663] Running
	I1124 14:06:30.833490  227999 system_pods.go:89] "kindnet-7c59v" [8a5b0f0c-0045-46ca-b1fb-3b810431b3c1] Running
	I1124 14:06:30.833495  227999 system_pods.go:89] "kube-apiserver-no-preload-694102" [bf827f89-f5fe-412c-9dde-002f07d7813f] Running
	I1124 14:06:30.833500  227999 system_pods.go:89] "kube-controller-manager-no-preload-694102" [23d3e702-29bf-47e6-a3a6-02bac428205b] Running
	I1124 14:06:30.833503  227999 system_pods.go:89] "kube-proxy-zfqkk" [25554c4e-42ac-44c3-b789-ec859d575750] Running
	I1124 14:06:30.833507  227999 system_pods.go:89] "kube-scheduler-no-preload-694102" [ca86e2f7-72b9-4ebf-a6b1-9d6d1fcdeb26] Running
	I1124 14:06:30.833511  227999 system_pods.go:89] "storage-provisioner" [c7650495-d936-47cd-8950-19783ba64e6c] Running
	I1124 14:06:30.833518  227999 system_pods.go:126] duration metric: took 2.063993383s to wait for k8s-apps to be running ...
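
The "will retry after ...: missing components: kube-dns" lines above come from a simple poll loop: list the kube-system pods, and if any required component is still missing or not running, sleep a short, growing interval and try again until a deadline. A rough Go sketch of that shape (not minikube's retry.go, just an illustration of the pattern; the delays and the error text are modelled on the log):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil re-runs check with a growing, slightly jittered delay until it
	// succeeds or the deadline passes - the shape of the retry lines above.
	func retryUntil(deadline time.Duration, check func() error) error {
		start := time.Now()
		delay := 200 * time.Millisecond
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("timed out: %w", err)
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			delay += 100 * time.Millisecond
		}
	}

	func main() {
		missing := 3 // stand-in for "components still missing"
		err := retryUntil(2*time.Second, func() error {
			if missing > 0 {
				missing--
				return errors.New("missing components: kube-dns")
			}
			return nil
		})
		fmt.Println("result:", err)
	}
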
	I1124 14:06:30.833530  227999 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:06:30.833578  227999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:06:30.852325  227999 system_svc.go:56] duration metric: took 18.790098ms WaitForService to wait for kubelet
	I1124 14:06:30.852352  227999 kubeadm.go:587] duration metric: took 16.818835807s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:06:30.852372  227999 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:06:30.857200  227999 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:06:30.857232  227999 node_conditions.go:123] node cpu capacity is 2
	I1124 14:06:30.857244  227999 node_conditions.go:105] duration metric: took 4.866915ms to run NodePressure ...
	I1124 14:06:30.857257  227999 start.go:242] waiting for startup goroutines ...
	I1124 14:06:30.857265  227999 start.go:247] waiting for cluster config update ...
	I1124 14:06:30.857277  227999 start.go:256] writing updated cluster config ...
	I1124 14:06:30.857517  227999 ssh_runner.go:195] Run: rm -f paused
	I1124 14:06:30.861770  227999 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:06:30.865665  227999 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mlv2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:30.878488  227999 pod_ready.go:94] pod "coredns-66bc5c9577-mlv2v" is "Ready"
	I1124 14:06:30.878513  227999 pod_ready.go:86] duration metric: took 12.775741ms for pod "coredns-66bc5c9577-mlv2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:30.882834  227999 pod_ready.go:83] waiting for pod "etcd-no-preload-694102" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:30.890795  227999 pod_ready.go:94] pod "etcd-no-preload-694102" is "Ready"
	I1124 14:06:30.890874  227999 pod_ready.go:86] duration metric: took 7.965073ms for pod "etcd-no-preload-694102" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:30.894169  227999 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-694102" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:30.900717  227999 pod_ready.go:94] pod "kube-apiserver-no-preload-694102" is "Ready"
	I1124 14:06:30.900792  227999 pod_ready.go:86] duration metric: took 6.547802ms for pod "kube-apiserver-no-preload-694102" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:30.903530  227999 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-694102" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:31.266874  227999 pod_ready.go:94] pod "kube-controller-manager-no-preload-694102" is "Ready"
	I1124 14:06:31.266916  227999 pod_ready.go:86] duration metric: took 363.323272ms for pod "kube-controller-manager-no-preload-694102" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:31.467590  227999 pod_ready.go:83] waiting for pod "kube-proxy-zfqkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:31.867036  227999 pod_ready.go:94] pod "kube-proxy-zfqkk" is "Ready"
	I1124 14:06:31.867083  227999 pod_ready.go:86] duration metric: took 399.459252ms for pod "kube-proxy-zfqkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:32.069097  227999 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-694102" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:32.466481  227999 pod_ready.go:94] pod "kube-scheduler-no-preload-694102" is "Ready"
	I1124 14:06:32.466585  227999 pod_ready.go:86] duration metric: took 397.453444ms for pod "kube-scheduler-no-preload-694102" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:32.466625  227999 pod_ready.go:40] duration metric: took 1.604820905s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:06:32.596555  227999 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:06:32.600552  227999 out.go:179] * Done! kubectl is now configured to use "no-preload-694102" cluster and "default" namespace by default
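
The pod_ready block above polls each control-plane pod until its Ready condition is true, within a 4m0s budget. A rough command-line equivalent, wrapped in Go for consistency with the other sketches here (the context, namespace and pod names are taken from the log; this kubectl-based helper is illustrative, not minikube's implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitReady blocks until the named pod reports condition Ready, using
	// `kubectl wait`, with a timeout comparable to the 4m0s budget in the log.
	func waitReady(ctxName, namespace, pod string, timeout time.Duration) error {
		cmd := exec.Command("kubectl", "--context", ctxName, "-n", namespace,
			"wait", "--for=condition=Ready", "pod/"+pod,
			fmt.Sprintf("--timeout=%s", timeout))
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		pods := []string{
			"coredns-66bc5c9577-mlv2v",
			"etcd-no-preload-694102",
			"kube-apiserver-no-preload-694102",
		}
		for _, p := range pods {
			if err := waitReady("no-preload-694102", "kube-system", p, 4*time.Minute); err != nil {
				panic(err)
			}
		}
	}
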
	I1124 14:06:31.064155  235400 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:06:31.064185  235400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:06:31.064264  235400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-857121
	I1124 14:06:31.064990  235400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/newest-cni-857121/id_rsa Username:docker}
	I1124 14:06:31.097736  235400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/newest-cni-857121/id_rsa Username:docker}
	I1124 14:06:31.102690  235400 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:06:31.102721  235400 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:06:31.102799  235400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-857121
	I1124 14:06:31.119615  235400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/newest-cni-857121/id_rsa Username:docker}
	I1124 14:06:31.142704  235400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/newest-cni-857121/id_rsa Username:docker}
	I1124 14:06:31.390354  235400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:06:31.467442  235400 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:06:31.467564  235400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:06:31.610058  235400 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 14:06:31.610134  235400 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 14:06:31.615373  235400 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 14:06:31.615448  235400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1124 14:06:31.723477  235400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:06:31.731571  235400 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 14:06:31.731653  235400 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 14:06:31.744390  235400 api_server.go:72] duration metric: took 799.322462ms to wait for apiserver process to appear ...
	I1124 14:06:31.744469  235400 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:06:31.744503  235400 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 14:06:31.810747  235400 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 14:06:31.810773  235400 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 14:06:31.857128  235400 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 14:06:31.857155  235400 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 14:06:31.935644  235400 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 14:06:31.935682  235400 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 14:06:31.942990  235400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:06:32.003180  235400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 14:06:32.068551  235400 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 14:06:32.068627  235400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 14:06:32.256687  235400 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 14:06:32.256726  235400 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 14:06:32.433857  235400 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 14:06:32.433884  235400 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 14:06:32.540008  235400 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 14:06:32.540036  235400 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 14:06:32.596368  235400 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 14:06:32.596392  235400 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 14:06:32.665696  235400 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:06:32.665724  235400 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 14:06:32.710186  235400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:06:36.051889  235400 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 14:06:36.051948  235400 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 14:06:36.051962  235400 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 14:06:36.117489  235400 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 14:06:36.117525  235400 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 14:06:36.244761  235400 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 14:06:36.288250  235400 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 14:06:36.288293  235400 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 14:06:36.745350  235400 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 14:06:36.757159  235400 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 14:06:36.757221  235400 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 14:06:37.244994  235400 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 14:06:37.263930  235400 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 14:06:37.263963  235400 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 14:06:37.745544  235400 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 14:06:37.757914  235400 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 14:06:37.775255  235400 api_server.go:141] control plane version: v1.34.1
	I1124 14:06:37.775289  235400 api_server.go:131] duration metric: took 6.03079911s to wait for apiserver health ...
	I1124 14:06:37.775299  235400 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:06:37.778676  235400 system_pods.go:59] 9 kube-system pods found
	I1124 14:06:37.778720  235400 system_pods.go:61] "coredns-66bc5c9577-7cvsd" [8ad2bcc7-9b08-4dbf-96ea-085c122c93fb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 14:06:37.778729  235400 system_pods.go:61] "etcd-newest-cni-857121" [67b2a78b-0f58-4e0c-84c7-ec276aa97a60] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:06:37.778738  235400 system_pods.go:61] "kindnet-bzm6b" [47fdf238-33fb-4c18-a881-22172364295c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 14:06:37.778745  235400 system_pods.go:61] "kube-apiserver-newest-cni-857121" [0e18cf88-bc3f-44ff-96a1-2a16cf94054f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:06:37.778757  235400 system_pods.go:61] "kube-controller-manager-newest-cni-857121" [527b0021-5bab-4be8-ab0e-99fa9e3d3df2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:06:37.778764  235400 system_pods.go:61] "kube-proxy-w5bpl" [75605361-5fe9-4923-a314-adb4fac2221e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 14:06:37.778771  235400 system_pods.go:61] "kube-scheduler-newest-cni-857121" [01fdce1c-eecc-4d81-8a77-f0e3539d35ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:06:37.778782  235400 system_pods.go:61] "metrics-server-746fcd58dc-dnztb" [f7e92bcb-99d7-454b-8b2f-fd08fbc7f265] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 14:06:37.778787  235400 system_pods.go:61] "storage-provisioner" [74025269-2080-4988-9f1c-68e348fe7124] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 14:06:37.778794  235400 system_pods.go:74] duration metric: took 3.488767ms to wait for pod list to return data ...
	I1124 14:06:37.778807  235400 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:06:37.785430  235400 default_sa.go:45] found service account: "default"
	I1124 14:06:37.785468  235400 default_sa.go:55] duration metric: took 6.655127ms for default service account to be created ...
	I1124 14:06:37.785481  235400 kubeadm.go:587] duration metric: took 6.840419527s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 14:06:37.785498  235400 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:06:37.788149  235400 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 14:06:37.788186  235400 node_conditions.go:123] node cpu capacity is 2
	I1124 14:06:37.788199  235400 node_conditions.go:105] duration metric: took 2.696676ms to run NodePressure ...
	I1124 14:06:37.788219  235400 start.go:242] waiting for startup goroutines ...
	I1124 14:06:38.513008  235400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.789441789s)
	I1124 14:06:38.513091  235400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.570058367s)
	I1124 14:06:38.513377  235400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.510154301s)
	I1124 14:06:38.513406  235400 addons.go:495] Verifying addon metrics-server=true in "newest-cni-857121"
	I1124 14:06:38.513498  235400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.803278284s)
	I1124 14:06:38.516654  235400 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-857121 addons enable metrics-server
	
	I1124 14:06:38.522865  235400 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1124 14:06:38.526062  235400 addons.go:530] duration metric: took 7.580606953s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I1124 14:06:38.526147  235400 start.go:247] waiting for cluster config update ...
	I1124 14:06:38.526180  235400 start.go:256] writing updated cluster config ...
	I1124 14:06:38.526495  235400 ssh_runner.go:195] Run: rm -f paused
	I1124 14:06:38.615647  235400 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 14:06:38.619536  235400 out.go:179] * Done! kubectl is now configured to use "newest-cni-857121" cluster and "default" namespace by default
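	Note on the healthz exchange above: it is the usual polling pattern during apiserver bring-up — the early 403s (anonymous user denied) and 500s (poststarthooks such as rbac/bootstrap-roles still reported as failed) are transient, and the loop simply retries until /healthz returns 200. A minimal Go sketch of that pattern is shown below; it is illustrative only, not minikube's actual api_server.go, and the endpoint URL, retry interval, and timeout are assumptions.
	
	// waitForHealthz polls an apiserver /healthz endpoint until it returns
	// HTTP 200 or the deadline expires. Illustrative sketch only.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver's serving cert is self-signed during bring-up,
			// so this probe skips verification, as a health check typically would.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered "ok"
				}
			}
			// Non-200 (403/500) or a connection error: wait and retry.
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}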
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	163eb87995029       1611cd07b61d5       8 seconds ago       Running             busybox                   0                   51762fb9df68c       busybox                                     default
	946f57c26f2b7       138784d87c9c5       14 seconds ago      Running             coredns                   0                   08de473ffdbeb       coredns-66bc5c9577-mlv2v                    kube-system
	d6e8512e69a33       66749159455b3       14 seconds ago      Running             storage-provisioner       0                   61b24dad847bd       storage-provisioner                         kube-system
	a1db3bb697698       b1a8c6f707935       25 seconds ago      Running             kindnet-cni               0                   a63d18f034f26       kindnet-7c59v                               kube-system
	9d3e142da2f58       05baa95f5142d       28 seconds ago      Running             kube-proxy                0                   0e9a45f087804       kube-proxy-zfqkk                            kube-system
	e292eee04d520       a1894772a478e       47 seconds ago      Running             etcd                      0                   b465546117c08       etcd-no-preload-694102                      kube-system
	3419b4aa1d824       7eb2c6ff0c5a7       47 seconds ago      Running             kube-controller-manager   0                   573b593651283       kube-controller-manager-no-preload-694102   kube-system
	87cd43fd53099       b5f57ec6b9867       47 seconds ago      Running             kube-scheduler            0                   44c7f6f32a448       kube-scheduler-no-preload-694102            kube-system
	6091a6176c58f       43911e833d64d       47 seconds ago      Running             kube-apiserver            0                   ba0521ba90d26       kube-apiserver-no-preload-694102            kube-system
	
	
	==> containerd <==
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.228719962Z" level=info msg="CreateContainer within sandbox \"61b24dad847bd3f04d6774832019e6baba904f093144a79f4b695b0860665431\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"d6e8512e69a330afbd862c90d0ccc7512efbe7a0737efdd4514e6d7148ba97ae\""
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.229898953Z" level=info msg="StartContainer for \"d6e8512e69a330afbd862c90d0ccc7512efbe7a0737efdd4514e6d7148ba97ae\""
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.231645958Z" level=info msg="connecting to shim d6e8512e69a330afbd862c90d0ccc7512efbe7a0737efdd4514e6d7148ba97ae" address="unix:///run/containerd/s/4f24a9ca1c31e9bc5f10c93738b3c2d5d3840ca2e0d860b65aff54598bc0df09" protocol=ttrpc version=3
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.234632763Z" level=info msg="Container 946f57c26f2b7fb53897eb22e9eff868d9bd49eab227874672294881b6323429: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.269232194Z" level=info msg="CreateContainer within sandbox \"08de473ffdbeba53b447ef018345568dae4703c451627d915b546f132d6deda7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"946f57c26f2b7fb53897eb22e9eff868d9bd49eab227874672294881b6323429\""
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.272948352Z" level=info msg="StartContainer for \"946f57c26f2b7fb53897eb22e9eff868d9bd49eab227874672294881b6323429\""
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.274534157Z" level=info msg="connecting to shim 946f57c26f2b7fb53897eb22e9eff868d9bd49eab227874672294881b6323429" address="unix:///run/containerd/s/6c27632380d6ee7274590d004e922064863ee7fd90097024b8a099e5a6ba2f0c" protocol=ttrpc version=3
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.407024789Z" level=info msg="StartContainer for \"d6e8512e69a330afbd862c90d0ccc7512efbe7a0737efdd4514e6d7148ba97ae\" returns successfully"
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.447423575Z" level=info msg="StartContainer for \"946f57c26f2b7fb53897eb22e9eff868d9bd49eab227874672294881b6323429\" returns successfully"
	Nov 24 14:06:33 no-preload-694102 containerd[757]: time="2025-11-24T14:06:33.246056269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:d1d404ac-cc11-4eb6-ae07-b81ddad14d37,Namespace:default,Attempt:0,}"
	Nov 24 14:06:33 no-preload-694102 containerd[757]: time="2025-11-24T14:06:33.316532968Z" level=info msg="connecting to shim 51762fb9df68c598b0e8812b7b41ae5270359bef99e2bc2977d7786d7a7ac141" address="unix:///run/containerd/s/814dfc9f9acc2c18d7e8ab78382f15fd7334ca370b0c2367223d84c1c086a794" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 14:06:33 no-preload-694102 containerd[757]: time="2025-11-24T14:06:33.447537780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:d1d404ac-cc11-4eb6-ae07-b81ddad14d37,Namespace:default,Attempt:0,} returns sandbox id \"51762fb9df68c598b0e8812b7b41ae5270359bef99e2bc2977d7786d7a7ac141\""
	Nov 24 14:06:33 no-preload-694102 containerd[757]: time="2025-11-24T14:06:33.449649942Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.450478699Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.452455130Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.455231881Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.458537254Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.459425747Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.009608222s"
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.459469719Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.472761059Z" level=info msg="CreateContainer within sandbox \"51762fb9df68c598b0e8812b7b41ae5270359bef99e2bc2977d7786d7a7ac141\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.498978221Z" level=info msg="Container 163eb87995029406ede5b540330a113fb481fc9dfb6cef1798d33c49676a6800: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.513700387Z" level=info msg="CreateContainer within sandbox \"51762fb9df68c598b0e8812b7b41ae5270359bef99e2bc2977d7786d7a7ac141\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"163eb87995029406ede5b540330a113fb481fc9dfb6cef1798d33c49676a6800\""
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.516916314Z" level=info msg="StartContainer for \"163eb87995029406ede5b540330a113fb481fc9dfb6cef1798d33c49676a6800\""
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.519991366Z" level=info msg="connecting to shim 163eb87995029406ede5b540330a113fb481fc9dfb6cef1798d33c49676a6800" address="unix:///run/containerd/s/814dfc9f9acc2c18d7e8ab78382f15fd7334ca370b0c2367223d84c1c086a794" protocol=ttrpc version=3
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.607548769Z" level=info msg="StartContainer for \"163eb87995029406ede5b540330a113fb481fc9dfb6cef1798d33c49676a6800\" returns successfully"
	
	
	==> coredns [946f57c26f2b7fb53897eb22e9eff868d9bd49eab227874672294881b6323429] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53191 - 63475 "HINFO IN 3638190689505453823.4999176802452857747. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.052472119s
	
	
	==> describe nodes <==
	Name:               no-preload-694102
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-694102
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=no-preload-694102
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_06_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:06:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-694102
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:06:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:06:39 +0000   Mon, 24 Nov 2025 14:05:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:06:39 +0000   Mon, 24 Nov 2025 14:05:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:06:39 +0000   Mon, 24 Nov 2025 14:05:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:06:39 +0000   Mon, 24 Nov 2025 14:06:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-694102
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                2162c161-b033-4214-a405-fa28dbb15d11
	  Boot ID:                    dd480c26-e101-4930-b98c-54c06b430fdc
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-mlv2v                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-694102                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kindnet-7c59v                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-694102             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-694102    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-zfqkk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-694102             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node no-preload-694102 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node no-preload-694102 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     49s (x7 over 49s)  kubelet          Node no-preload-694102 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  49s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-694102 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-694102 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-694102 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-694102 event: Registered Node no-preload-694102 in Controller
	  Normal   NodeReady                16s                kubelet          Node no-preload-694102 status is now: NodeReady
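	Aside on the Conditions block above: these are the same node conditions the minikube log reads when "verifying NodePressure condition" (node_conditions.go). A minimal client-go sketch that lists them is given below, under the assumption that k8s.io/client-go is available and a kubeconfig exists at the default location; it is not part of the test suite.
	
	// List every node's conditions (MemoryPressure, DiskPressure, PIDPressure, Ready).
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumes ~/.kube/config points at the cluster under inspection.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				fmt.Printf("%s\t%s=%s\t%s\n", n.Name, c.Type, c.Status, c.Reason)
			}
		}
	}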
	
	
	==> dmesg <==
	[Nov24 12:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014697] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497291] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033884] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.804993] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.476130] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [e292eee04d5207d003d6a15401217d93e23a551eb753118c2698e918a79b2404] <==
	{"level":"warn","ts":"2025-11-24T14:06:01.850383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:01.917742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:01.951423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.008343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.084222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.124902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.160242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.215295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.259656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.286574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.328478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.350931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.392727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.458974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.481369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.517524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.543396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.581606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.606525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.625156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.641987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.698898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.738006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.768554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:03.016188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41272","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:06:44 up  1:49,  0 user,  load average: 5.81, 4.43, 3.49
	Linux no-preload-694102 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a1db3bb6976980eb84e4914f2a03aa12ecd36ce023d338d485de53742975835f] <==
	I1124 14:06:18.369066       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:06:18.369776       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 14:06:18.369997       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:06:18.370072       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:06:18.370231       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:06:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:06:18.576708       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:06:18.576820       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:06:18.576852       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:06:18.579759       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 14:06:18.777009       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:06:18.777261       1 metrics.go:72] Registering metrics
	I1124 14:06:18.777417       1 controller.go:711] "Syncing nftables rules"
	I1124 14:06:28.580120       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:06:28.580178       1 main.go:301] handling current node
	I1124 14:06:38.572127       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:06:38.572174       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6091a6176c58febadbf0290401cf3ff41e3afe336a9afda7a4379706800fe9d2] <==
	I1124 14:06:05.152804       1 policy_source.go:240] refreshing policies
	I1124 14:06:05.219289       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:06:05.328715       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:06:05.370104       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:06:05.376417       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 14:06:05.428643       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:06:05.444557       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:06:05.535899       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 14:06:05.609886       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 14:06:05.612755       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:06:07.268321       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:06:07.342975       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:06:07.481991       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 14:06:07.490174       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 14:06:07.491311       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:06:07.499394       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:06:08.087414       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:06:08.656785       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:06:08.682787       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 14:06:08.699685       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 14:06:13.955610       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:06:13.962571       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:06:14.208905       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:06:14.235146       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1124 14:06:42.261456       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:36424: use of closed network connection
	
	
	==> kube-controller-manager [3419b4aa1d8248de1aad4e86c2bc5857b2b75b353e5d4b5e4c6255159c6a872f] <==
	I1124 14:06:13.145797       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 14:06:13.147890       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 14:06:13.148148       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:06:13.148302       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:06:13.153366       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 14:06:13.154706       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 14:06:13.170298       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 14:06:13.176906       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:06:13.184348       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:06:13.189127       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 14:06:13.189747       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 14:06:13.190844       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:06:13.192395       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 14:06:13.195696       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:06:13.201214       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:06:13.211263       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 14:06:13.216173       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:06:13.216353       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 14:06:13.217106       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:06:13.217278       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 14:06:13.218571       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-694102" podCIDRs=["10.244.0.0/24"]
	I1124 14:06:13.230896       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:06:13.231102       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:06:13.231188       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:06:33.139882       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9d3e142da2f58c9d760f3cde2d16603ca77a56cb78d10aaede8c43b7bc25d147] <==
	I1124 14:06:15.869803       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:06:15.982566       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:06:16.084149       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:06:16.084189       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 14:06:16.084260       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:06:16.203193       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:06:16.203317       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:06:16.214958       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:06:16.215358       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:06:16.216542       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:06:16.219871       1 config.go:200] "Starting service config controller"
	I1124 14:06:16.219887       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:06:16.220023       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:06:16.220031       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:06:16.220048       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:06:16.220052       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:06:16.220678       1 config.go:309] "Starting node config controller"
	I1124 14:06:16.220686       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:06:16.220693       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:06:16.323563       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:06:16.323598       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:06:16.323657       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [87cd43fd5309925d217afbff2150e4e6af9a87b2a39838ad50ec55dd2be7ae20] <==
	E1124 14:06:05.298828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 14:06:05.299023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 14:06:05.299209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:06:05.299594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 14:06:05.316238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 14:06:06.185004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 14:06:06.193524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 14:06:06.222055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 14:06:06.245626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 14:06:06.253861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 14:06:06.294134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 14:06:06.307324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 14:06:06.377174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 14:06:06.416310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 14:06:06.462031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 14:06:06.472738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 14:06:06.505458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 14:06:06.505677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 14:06:06.511747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 14:06:06.545776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 14:06:06.632366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 14:06:06.794348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:06:06.850604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 14:06:06.904051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1124 14:06:08.428823       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:06:10 no-preload-694102 kubelet[2096]: I1124 14:06:10.048564    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-694102" podStartSLOduration=4.048544985 podStartE2EDuration="4.048544985s" podCreationTimestamp="2025-11-24 14:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:06:09.965556004 +0000 UTC m=+1.361352439" watchObservedRunningTime="2025-11-24 14:06:10.048544985 +0000 UTC m=+1.444341412"
	Nov 24 14:06:10 no-preload-694102 kubelet[2096]: I1124 14:06:10.088801    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-694102" podStartSLOduration=1.088782215 podStartE2EDuration="1.088782215s" podCreationTimestamp="2025-11-24 14:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:06:10.067685793 +0000 UTC m=+1.463482219" watchObservedRunningTime="2025-11-24 14:06:10.088782215 +0000 UTC m=+1.484578650"
	Nov 24 14:06:10 no-preload-694102 kubelet[2096]: I1124 14:06:10.116526    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-694102" podStartSLOduration=1.116508796 podStartE2EDuration="1.116508796s" podCreationTimestamp="2025-11-24 14:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:06:10.089241691 +0000 UTC m=+1.485038126" watchObservedRunningTime="2025-11-24 14:06:10.116508796 +0000 UTC m=+1.512305223"
	Nov 24 14:06:13 no-preload-694102 kubelet[2096]: I1124 14:06:13.267309    2096 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 14:06:13 no-preload-694102 kubelet[2096]: I1124 14:06:13.268998    2096 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.618710    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/25554c4e-42ac-44c3-b789-ec859d575750-kube-proxy\") pod \"kube-proxy-zfqkk\" (UID: \"25554c4e-42ac-44c3-b789-ec859d575750\") " pod="kube-system/kube-proxy-zfqkk"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.618752    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfx6m\" (UniqueName: \"kubernetes.io/projected/25554c4e-42ac-44c3-b789-ec859d575750-kube-api-access-hfx6m\") pod \"kube-proxy-zfqkk\" (UID: \"25554c4e-42ac-44c3-b789-ec859d575750\") " pod="kube-system/kube-proxy-zfqkk"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.618786    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25554c4e-42ac-44c3-b789-ec859d575750-xtables-lock\") pod \"kube-proxy-zfqkk\" (UID: \"25554c4e-42ac-44c3-b789-ec859d575750\") " pod="kube-system/kube-proxy-zfqkk"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.618804    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25554c4e-42ac-44c3-b789-ec859d575750-lib-modules\") pod \"kube-proxy-zfqkk\" (UID: \"25554c4e-42ac-44c3-b789-ec859d575750\") " pod="kube-system/kube-proxy-zfqkk"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.721200    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a5b0f0c-0045-46ca-b1fb-3b810431b3c1-xtables-lock\") pod \"kindnet-7c59v\" (UID: \"8a5b0f0c-0045-46ca-b1fb-3b810431b3c1\") " pod="kube-system/kindnet-7c59v"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.721280    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8a5b0f0c-0045-46ca-b1fb-3b810431b3c1-cni-cfg\") pod \"kindnet-7c59v\" (UID: \"8a5b0f0c-0045-46ca-b1fb-3b810431b3c1\") " pod="kube-system/kindnet-7c59v"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.721334    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a5b0f0c-0045-46ca-b1fb-3b810431b3c1-lib-modules\") pod \"kindnet-7c59v\" (UID: \"8a5b0f0c-0045-46ca-b1fb-3b810431b3c1\") " pod="kube-system/kindnet-7c59v"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.721354    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9jjk\" (UniqueName: \"kubernetes.io/projected/8a5b0f0c-0045-46ca-b1fb-3b810431b3c1-kube-api-access-s9jjk\") pod \"kindnet-7c59v\" (UID: \"8a5b0f0c-0045-46ca-b1fb-3b810431b3c1\") " pod="kube-system/kindnet-7c59v"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.801400    2096 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 14:06:18 no-preload-694102 kubelet[2096]: I1124 14:06:18.303252    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zfqkk" podStartSLOduration=4.303236146 podStartE2EDuration="4.303236146s" podCreationTimestamp="2025-11-24 14:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:06:16.346965982 +0000 UTC m=+7.742762417" watchObservedRunningTime="2025-11-24 14:06:18.303236146 +0000 UTC m=+9.699032581"
	Nov 24 14:06:18 no-preload-694102 kubelet[2096]: I1124 14:06:18.310195    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7c59v" podStartSLOduration=1.695427033 podStartE2EDuration="4.310174369s" podCreationTimestamp="2025-11-24 14:06:14 +0000 UTC" firstStartedPulling="2025-11-24 14:06:15.416181948 +0000 UTC m=+6.811978374" lastFinishedPulling="2025-11-24 14:06:18.030929283 +0000 UTC m=+9.426725710" observedRunningTime="2025-11-24 14:06:18.302994822 +0000 UTC m=+9.698791257" watchObservedRunningTime="2025-11-24 14:06:18.310174369 +0000 UTC m=+9.705970895"
	Nov 24 14:06:28 no-preload-694102 kubelet[2096]: I1124 14:06:28.586931    2096 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 14:06:28 no-preload-694102 kubelet[2096]: I1124 14:06:28.755823    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkg4l\" (UniqueName: \"kubernetes.io/projected/67b42ac3-2efb-4c75-bf32-69997934054f-kube-api-access-fkg4l\") pod \"coredns-66bc5c9577-mlv2v\" (UID: \"67b42ac3-2efb-4c75-bf32-69997934054f\") " pod="kube-system/coredns-66bc5c9577-mlv2v"
	Nov 24 14:06:28 no-preload-694102 kubelet[2096]: I1124 14:06:28.755890    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h9pk\" (UniqueName: \"kubernetes.io/projected/c7650495-d936-47cd-8950-19783ba64e6c-kube-api-access-5h9pk\") pod \"storage-provisioner\" (UID: \"c7650495-d936-47cd-8950-19783ba64e6c\") " pod="kube-system/storage-provisioner"
	Nov 24 14:06:28 no-preload-694102 kubelet[2096]: I1124 14:06:28.755980    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c7650495-d936-47cd-8950-19783ba64e6c-tmp\") pod \"storage-provisioner\" (UID: \"c7650495-d936-47cd-8950-19783ba64e6c\") " pod="kube-system/storage-provisioner"
	Nov 24 14:06:28 no-preload-694102 kubelet[2096]: I1124 14:06:28.756006    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67b42ac3-2efb-4c75-bf32-69997934054f-config-volume\") pod \"coredns-66bc5c9577-mlv2v\" (UID: \"67b42ac3-2efb-4c75-bf32-69997934054f\") " pod="kube-system/coredns-66bc5c9577-mlv2v"
	Nov 24 14:06:30 no-preload-694102 kubelet[2096]: I1124 14:06:30.413781    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mlv2v" podStartSLOduration=16.413750336 podStartE2EDuration="16.413750336s" podCreationTimestamp="2025-11-24 14:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:06:30.388733818 +0000 UTC m=+21.784530253" watchObservedRunningTime="2025-11-24 14:06:30.413750336 +0000 UTC m=+21.809546771"
	Nov 24 14:06:32 no-preload-694102 kubelet[2096]: I1124 14:06:32.932232    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.932197244 podStartE2EDuration="16.932197244s" podCreationTimestamp="2025-11-24 14:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:06:30.460851962 +0000 UTC m=+21.856648397" watchObservedRunningTime="2025-11-24 14:06:32.932197244 +0000 UTC m=+24.327993679"
	Nov 24 14:06:32 no-preload-694102 kubelet[2096]: I1124 14:06:32.996269    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649th\" (UniqueName: \"kubernetes.io/projected/d1d404ac-cc11-4eb6-ae07-b81ddad14d37-kube-api-access-649th\") pod \"busybox\" (UID: \"d1d404ac-cc11-4eb6-ae07-b81ddad14d37\") " pod="default/busybox"
	Nov 24 14:06:36 no-preload-694102 kubelet[2096]: I1124 14:06:36.408097    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.396836088 podStartE2EDuration="4.408065603s" podCreationTimestamp="2025-11-24 14:06:32 +0000 UTC" firstStartedPulling="2025-11-24 14:06:33.44908946 +0000 UTC m=+24.844885887" lastFinishedPulling="2025-11-24 14:06:35.460318975 +0000 UTC m=+26.856115402" observedRunningTime="2025-11-24 14:06:36.408027687 +0000 UTC m=+27.803824113" watchObservedRunningTime="2025-11-24 14:06:36.408065603 +0000 UTC m=+27.803862054"
	
	
	==> storage-provisioner [d6e8512e69a330afbd862c90d0ccc7512efbe7a0737efdd4514e6d7148ba97ae] <==
	I1124 14:06:29.416425       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:06:29.465817       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:06:29.465864       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:06:29.469664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:29.483419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:06:29.484563       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:06:29.484814       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-694102_9f969bd9-a57c-4e56-a57b-86eb25bac79b!
	I1124 14:06:29.486442       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"db805b77-a3d6-4c9c-95e0-c4ae98bdc958", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-694102_9f969bd9-a57c-4e56-a57b-86eb25bac79b became leader
	W1124 14:06:29.521780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:29.529675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:06:29.585206       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-694102_9f969bd9-a57c-4e56-a57b-86eb25bac79b!
	W1124 14:06:31.533821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:31.543170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:33.546631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:33.553454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:35.556559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:35.562384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:37.567220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:37.574963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:39.579511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:39.587475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:41.591272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:41.597239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:43.601075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:43.606484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-694102 -n no-preload-694102
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-694102 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-694102
helpers_test.go:243: (dbg) docker inspect no-preload-694102:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2919e7e2844d1cf44454f2c989bb893b3894a7f26ed969b74c6a6adfa629bed6",
	        "Created": "2025-11-24T14:05:20.101247347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 228306,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:05:20.202900361Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/2919e7e2844d1cf44454f2c989bb893b3894a7f26ed969b74c6a6adfa629bed6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2919e7e2844d1cf44454f2c989bb893b3894a7f26ed969b74c6a6adfa629bed6/hostname",
	        "HostsPath": "/var/lib/docker/containers/2919e7e2844d1cf44454f2c989bb893b3894a7f26ed969b74c6a6adfa629bed6/hosts",
	        "LogPath": "/var/lib/docker/containers/2919e7e2844d1cf44454f2c989bb893b3894a7f26ed969b74c6a6adfa629bed6/2919e7e2844d1cf44454f2c989bb893b3894a7f26ed969b74c6a6adfa629bed6-json.log",
	        "Name": "/no-preload-694102",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-694102:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-694102",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2919e7e2844d1cf44454f2c989bb893b3894a7f26ed969b74c6a6adfa629bed6",
	                "LowerDir": "/var/lib/docker/overlay2/2043cead9d249d0fcb074b26ff27da6d74e0f435b45dc87810eac93600e787e5-init/diff:/var/lib/docker/overlay2/f206897dad0d7c6b66379aa7c75402ab98ba158a4fc5aedf84eda3d57da10430/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2043cead9d249d0fcb074b26ff27da6d74e0f435b45dc87810eac93600e787e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2043cead9d249d0fcb074b26ff27da6d74e0f435b45dc87810eac93600e787e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2043cead9d249d0fcb074b26ff27da6d74e0f435b45dc87810eac93600e787e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-694102",
	                "Source": "/var/lib/docker/volumes/no-preload-694102/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-694102",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-694102",
	                "name.minikube.sigs.k8s.io": "no-preload-694102",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fc348d81a08d06cdab27b864057fc0e4e77a5b6bf300294a793dbb2cfa2919b4",
	            "SandboxKey": "/var/run/docker/netns/fc348d81a08d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-694102": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:31:6e:4b:52:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "26d62b1d238cec67ff97d85579746f7a43022e393bcf007b8b06c40243c0378a",
	                    "EndpointID": "1f4bf4a4d9d9f8dd47a4ded5f04e54cada8aed3770c32f5ea7b33d19e75717b1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-694102",
	                        "2919e7e2844d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-694102 -n no-preload-694102
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-694102 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-694102 logs -n 25: (1.588119832s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable dashboard -p embed-certs-593634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:04 UTC │ 24 Nov 25 14:04 UTC │
	│ start   │ -p embed-certs-593634 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:04 UTC │ 24 Nov 25 14:05 UTC │
	│ image   │ default-k8s-diff-port-609438 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ pause   │ -p default-k8s-diff-port-609438 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ unpause │ -p default-k8s-diff-port-609438 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ delete  │ -p default-k8s-diff-port-609438                                                                                                                                                                                                                     │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ delete  │ -p default-k8s-diff-port-609438                                                                                                                                                                                                                     │ default-k8s-diff-port-609438 │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ delete  │ -p disable-driver-mounts-073831                                                                                                                                                                                                                     │ disable-driver-mounts-073831 │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ start   │ -p no-preload-694102 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-694102            │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:06 UTC │
	│ image   │ embed-certs-593634 image list --format=json                                                                                                                                                                                                         │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ pause   │ -p embed-certs-593634 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ unpause │ -p embed-certs-593634 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ delete  │ -p embed-certs-593634                                                                                                                                                                                                                               │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ delete  │ -p embed-certs-593634                                                                                                                                                                                                                               │ embed-certs-593634           │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ start   │ -p newest-cni-857121 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:06 UTC │
	│ addons  │ enable metrics-server -p newest-cni-857121 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ stop    │ -p newest-cni-857121 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-857121 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ start   │ -p newest-cni-857121 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ image   │ newest-cni-857121 image list --format=json                                                                                                                                                                                                          │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ pause   │ -p newest-cni-857121 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ unpause │ -p newest-cni-857121 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ delete  │ -p newest-cni-857121                                                                                                                                                                                                                                │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ delete  │ -p newest-cni-857121                                                                                                                                                                                                                                │ newest-cni-857121            │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	│ start   │ -p auto-803934 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-803934                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:06:45
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:06:45.851545  239251 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:06:45.851760  239251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:06:45.851782  239251 out.go:374] Setting ErrFile to fd 2...
	I1124 14:06:45.851804  239251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:06:45.852129  239251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 14:06:45.852551  239251 out.go:368] Setting JSON to false
	I1124 14:06:45.853507  239251 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6555,"bootTime":1763986651,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 14:06:45.853592  239251 start.go:143] virtualization:  
	I1124 14:06:45.857720  239251 out.go:179] * [auto-803934] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 14:06:45.861054  239251 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:06:45.861215  239251 notify.go:221] Checking for updates...
	I1124 14:06:45.867494  239251 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:06:45.875744  239251 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 14:06:45.879938  239251 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	I1124 14:06:45.882862  239251 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 14:06:45.885839  239251 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	163eb87995029       1611cd07b61d5       11 seconds ago      Running             busybox                   0                   51762fb9df68c       busybox                                     default
	946f57c26f2b7       138784d87c9c5       17 seconds ago      Running             coredns                   0                   08de473ffdbeb       coredns-66bc5c9577-mlv2v                    kube-system
	d6e8512e69a33       66749159455b3       17 seconds ago      Running             storage-provisioner       0                   61b24dad847bd       storage-provisioner                         kube-system
	a1db3bb697698       b1a8c6f707935       28 seconds ago      Running             kindnet-cni               0                   a63d18f034f26       kindnet-7c59v                               kube-system
	9d3e142da2f58       05baa95f5142d       31 seconds ago      Running             kube-proxy                0                   0e9a45f087804       kube-proxy-zfqkk                            kube-system
	e292eee04d520       a1894772a478e       49 seconds ago      Running             etcd                      0                   b465546117c08       etcd-no-preload-694102                      kube-system
	3419b4aa1d824       7eb2c6ff0c5a7       49 seconds ago      Running             kube-controller-manager   0                   573b593651283       kube-controller-manager-no-preload-694102   kube-system
	87cd43fd53099       b5f57ec6b9867       49 seconds ago      Running             kube-scheduler            0                   44c7f6f32a448       kube-scheduler-no-preload-694102            kube-system
	6091a6176c58f       43911e833d64d       50 seconds ago      Running             kube-apiserver            0                   ba0521ba90d26       kube-apiserver-no-preload-694102            kube-system
	
	
	==> containerd <==
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.228719962Z" level=info msg="CreateContainer within sandbox \"61b24dad847bd3f04d6774832019e6baba904f093144a79f4b695b0860665431\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"d6e8512e69a330afbd862c90d0ccc7512efbe7a0737efdd4514e6d7148ba97ae\""
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.229898953Z" level=info msg="StartContainer for \"d6e8512e69a330afbd862c90d0ccc7512efbe7a0737efdd4514e6d7148ba97ae\""
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.231645958Z" level=info msg="connecting to shim d6e8512e69a330afbd862c90d0ccc7512efbe7a0737efdd4514e6d7148ba97ae" address="unix:///run/containerd/s/4f24a9ca1c31e9bc5f10c93738b3c2d5d3840ca2e0d860b65aff54598bc0df09" protocol=ttrpc version=3
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.234632763Z" level=info msg="Container 946f57c26f2b7fb53897eb22e9eff868d9bd49eab227874672294881b6323429: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.269232194Z" level=info msg="CreateContainer within sandbox \"08de473ffdbeba53b447ef018345568dae4703c451627d915b546f132d6deda7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"946f57c26f2b7fb53897eb22e9eff868d9bd49eab227874672294881b6323429\""
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.272948352Z" level=info msg="StartContainer for \"946f57c26f2b7fb53897eb22e9eff868d9bd49eab227874672294881b6323429\""
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.274534157Z" level=info msg="connecting to shim 946f57c26f2b7fb53897eb22e9eff868d9bd49eab227874672294881b6323429" address="unix:///run/containerd/s/6c27632380d6ee7274590d004e922064863ee7fd90097024b8a099e5a6ba2f0c" protocol=ttrpc version=3
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.407024789Z" level=info msg="StartContainer for \"d6e8512e69a330afbd862c90d0ccc7512efbe7a0737efdd4514e6d7148ba97ae\" returns successfully"
	Nov 24 14:06:29 no-preload-694102 containerd[757]: time="2025-11-24T14:06:29.447423575Z" level=info msg="StartContainer for \"946f57c26f2b7fb53897eb22e9eff868d9bd49eab227874672294881b6323429\" returns successfully"
	Nov 24 14:06:33 no-preload-694102 containerd[757]: time="2025-11-24T14:06:33.246056269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:d1d404ac-cc11-4eb6-ae07-b81ddad14d37,Namespace:default,Attempt:0,}"
	Nov 24 14:06:33 no-preload-694102 containerd[757]: time="2025-11-24T14:06:33.316532968Z" level=info msg="connecting to shim 51762fb9df68c598b0e8812b7b41ae5270359bef99e2bc2977d7786d7a7ac141" address="unix:///run/containerd/s/814dfc9f9acc2c18d7e8ab78382f15fd7334ca370b0c2367223d84c1c086a794" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 14:06:33 no-preload-694102 containerd[757]: time="2025-11-24T14:06:33.447537780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:d1d404ac-cc11-4eb6-ae07-b81ddad14d37,Namespace:default,Attempt:0,} returns sandbox id \"51762fb9df68c598b0e8812b7b41ae5270359bef99e2bc2977d7786d7a7ac141\""
	Nov 24 14:06:33 no-preload-694102 containerd[757]: time="2025-11-24T14:06:33.449649942Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.450478699Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.452455130Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.455231881Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.458537254Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.459425747Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.009608222s"
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.459469719Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.472761059Z" level=info msg="CreateContainer within sandbox \"51762fb9df68c598b0e8812b7b41ae5270359bef99e2bc2977d7786d7a7ac141\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.498978221Z" level=info msg="Container 163eb87995029406ede5b540330a113fb481fc9dfb6cef1798d33c49676a6800: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.513700387Z" level=info msg="CreateContainer within sandbox \"51762fb9df68c598b0e8812b7b41ae5270359bef99e2bc2977d7786d7a7ac141\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"163eb87995029406ede5b540330a113fb481fc9dfb6cef1798d33c49676a6800\""
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.516916314Z" level=info msg="StartContainer for \"163eb87995029406ede5b540330a113fb481fc9dfb6cef1798d33c49676a6800\""
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.519991366Z" level=info msg="connecting to shim 163eb87995029406ede5b540330a113fb481fc9dfb6cef1798d33c49676a6800" address="unix:///run/containerd/s/814dfc9f9acc2c18d7e8ab78382f15fd7334ca370b0c2367223d84c1c086a794" protocol=ttrpc version=3
	Nov 24 14:06:35 no-preload-694102 containerd[757]: time="2025-11-24T14:06:35.607548769Z" level=info msg="StartContainer for \"163eb87995029406ede5b540330a113fb481fc9dfb6cef1798d33c49676a6800\" returns successfully"
	
	
	==> coredns [946f57c26f2b7fb53897eb22e9eff868d9bd49eab227874672294881b6323429] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53191 - 63475 "HINFO IN 3638190689505453823.4999176802452857747. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.052472119s
	
	
	==> describe nodes <==
	Name:               no-preload-694102
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-694102
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=no-preload-694102
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_06_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:06:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-694102
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:06:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:06:39 +0000   Mon, 24 Nov 2025 14:05:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:06:39 +0000   Mon, 24 Nov 2025 14:05:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:06:39 +0000   Mon, 24 Nov 2025 14:05:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:06:39 +0000   Mon, 24 Nov 2025 14:06:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-694102
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                2162c161-b033-4214-a405-fa28dbb15d11
	  Boot ID:                    dd480c26-e101-4930-b98c-54c06b430fdc
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-66bc5c9577-mlv2v                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     32s
	  kube-system                 etcd-no-preload-694102                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-7c59v                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-no-preload-694102             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-no-preload-694102    200m (10%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-zfqkk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-no-preload-694102             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 30s                kube-proxy       
	  Normal   NodeHasSufficientMemory  51s (x8 over 51s)  kubelet          Node no-preload-694102 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    51s (x8 over 51s)  kubelet          Node no-preload-694102 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     51s (x7 over 51s)  kubelet          Node no-preload-694102 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  51s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 38s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 38s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  37s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  37s                kubelet          Node no-preload-694102 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    37s                kubelet          Node no-preload-694102 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     37s                kubelet          Node no-preload-694102 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           33s                node-controller  Node no-preload-694102 event: Registered Node no-preload-694102 in Controller
	  Normal   NodeReady                18s                kubelet          Node no-preload-694102 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 12:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014697] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497291] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033884] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.804993] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.476130] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [e292eee04d5207d003d6a15401217d93e23a551eb753118c2698e918a79b2404] <==
	{"level":"warn","ts":"2025-11-24T14:06:01.850383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:01.917742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:01.951423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.008343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.084222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.124902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.160242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.215295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.259656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.286574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.328478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.350931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.392727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.458974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.481369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.517524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.543396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.581606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.606525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.625156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.641987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.698898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.738006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:02.768554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:06:03.016188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41272","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:06:47 up  1:49,  0 user,  load average: 5.58, 4.40, 3.48
	Linux no-preload-694102 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a1db3bb6976980eb84e4914f2a03aa12ecd36ce023d338d485de53742975835f] <==
	I1124 14:06:18.369066       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:06:18.369776       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 14:06:18.369997       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:06:18.370072       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:06:18.370231       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:06:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:06:18.576708       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:06:18.576820       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:06:18.576852       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:06:18.579759       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 14:06:18.777009       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:06:18.777261       1 metrics.go:72] Registering metrics
	I1124 14:06:18.777417       1 controller.go:711] "Syncing nftables rules"
	I1124 14:06:28.580120       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:06:28.580178       1 main.go:301] handling current node
	I1124 14:06:38.572127       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:06:38.572174       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6091a6176c58febadbf0290401cf3ff41e3afe336a9afda7a4379706800fe9d2] <==
	I1124 14:06:05.152804       1 policy_source.go:240] refreshing policies
	I1124 14:06:05.219289       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:06:05.328715       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:06:05.370104       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:06:05.376417       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 14:06:05.428643       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:06:05.444557       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:06:05.535899       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 14:06:05.609886       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 14:06:05.612755       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:06:07.268321       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:06:07.342975       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:06:07.481991       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 14:06:07.490174       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 14:06:07.491311       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:06:07.499394       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:06:08.087414       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:06:08.656785       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:06:08.682787       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 14:06:08.699685       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 14:06:13.955610       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:06:13.962571       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:06:14.208905       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:06:14.235146       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1124 14:06:42.261456       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:36424: use of closed network connection
	
	
	==> kube-controller-manager [3419b4aa1d8248de1aad4e86c2bc5857b2b75b353e5d4b5e4c6255159c6a872f] <==
	I1124 14:06:13.145797       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 14:06:13.147890       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 14:06:13.148148       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:06:13.148302       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:06:13.153366       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 14:06:13.154706       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 14:06:13.170298       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 14:06:13.176906       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:06:13.184348       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:06:13.189127       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 14:06:13.189747       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 14:06:13.190844       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:06:13.192395       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 14:06:13.195696       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:06:13.201214       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:06:13.211263       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 14:06:13.216173       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:06:13.216353       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 14:06:13.217106       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:06:13.217278       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 14:06:13.218571       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-694102" podCIDRs=["10.244.0.0/24"]
	I1124 14:06:13.230896       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:06:13.231102       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:06:13.231188       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:06:33.139882       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9d3e142da2f58c9d760f3cde2d16603ca77a56cb78d10aaede8c43b7bc25d147] <==
	I1124 14:06:15.869803       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:06:15.982566       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:06:16.084149       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:06:16.084189       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 14:06:16.084260       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:06:16.203193       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:06:16.203317       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:06:16.214958       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:06:16.215358       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:06:16.216542       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:06:16.219871       1 config.go:200] "Starting service config controller"
	I1124 14:06:16.219887       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:06:16.220023       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:06:16.220031       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:06:16.220048       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:06:16.220052       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:06:16.220678       1 config.go:309] "Starting node config controller"
	I1124 14:06:16.220686       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:06:16.220693       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:06:16.323563       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:06:16.323598       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:06:16.323657       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [87cd43fd5309925d217afbff2150e4e6af9a87b2a39838ad50ec55dd2be7ae20] <==
	E1124 14:06:05.298828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 14:06:05.299023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 14:06:05.299209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:06:05.299594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 14:06:05.316238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 14:06:06.185004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 14:06:06.193524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 14:06:06.222055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 14:06:06.245626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 14:06:06.253861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 14:06:06.294134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 14:06:06.307324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 14:06:06.377174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 14:06:06.416310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 14:06:06.462031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 14:06:06.472738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 14:06:06.505458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 14:06:06.505677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 14:06:06.511747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 14:06:06.545776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 14:06:06.632366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 14:06:06.794348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:06:06.850604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 14:06:06.904051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1124 14:06:08.428823       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:06:10 no-preload-694102 kubelet[2096]: I1124 14:06:10.048564    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-694102" podStartSLOduration=4.048544985 podStartE2EDuration="4.048544985s" podCreationTimestamp="2025-11-24 14:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:06:09.965556004 +0000 UTC m=+1.361352439" watchObservedRunningTime="2025-11-24 14:06:10.048544985 +0000 UTC m=+1.444341412"
	Nov 24 14:06:10 no-preload-694102 kubelet[2096]: I1124 14:06:10.088801    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-694102" podStartSLOduration=1.088782215 podStartE2EDuration="1.088782215s" podCreationTimestamp="2025-11-24 14:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:06:10.067685793 +0000 UTC m=+1.463482219" watchObservedRunningTime="2025-11-24 14:06:10.088782215 +0000 UTC m=+1.484578650"
	Nov 24 14:06:10 no-preload-694102 kubelet[2096]: I1124 14:06:10.116526    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-694102" podStartSLOduration=1.116508796 podStartE2EDuration="1.116508796s" podCreationTimestamp="2025-11-24 14:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:06:10.089241691 +0000 UTC m=+1.485038126" watchObservedRunningTime="2025-11-24 14:06:10.116508796 +0000 UTC m=+1.512305223"
	Nov 24 14:06:13 no-preload-694102 kubelet[2096]: I1124 14:06:13.267309    2096 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 14:06:13 no-preload-694102 kubelet[2096]: I1124 14:06:13.268998    2096 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.618710    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/25554c4e-42ac-44c3-b789-ec859d575750-kube-proxy\") pod \"kube-proxy-zfqkk\" (UID: \"25554c4e-42ac-44c3-b789-ec859d575750\") " pod="kube-system/kube-proxy-zfqkk"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.618752    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfx6m\" (UniqueName: \"kubernetes.io/projected/25554c4e-42ac-44c3-b789-ec859d575750-kube-api-access-hfx6m\") pod \"kube-proxy-zfqkk\" (UID: \"25554c4e-42ac-44c3-b789-ec859d575750\") " pod="kube-system/kube-proxy-zfqkk"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.618786    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25554c4e-42ac-44c3-b789-ec859d575750-xtables-lock\") pod \"kube-proxy-zfqkk\" (UID: \"25554c4e-42ac-44c3-b789-ec859d575750\") " pod="kube-system/kube-proxy-zfqkk"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.618804    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25554c4e-42ac-44c3-b789-ec859d575750-lib-modules\") pod \"kube-proxy-zfqkk\" (UID: \"25554c4e-42ac-44c3-b789-ec859d575750\") " pod="kube-system/kube-proxy-zfqkk"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.721200    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a5b0f0c-0045-46ca-b1fb-3b810431b3c1-xtables-lock\") pod \"kindnet-7c59v\" (UID: \"8a5b0f0c-0045-46ca-b1fb-3b810431b3c1\") " pod="kube-system/kindnet-7c59v"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.721280    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8a5b0f0c-0045-46ca-b1fb-3b810431b3c1-cni-cfg\") pod \"kindnet-7c59v\" (UID: \"8a5b0f0c-0045-46ca-b1fb-3b810431b3c1\") " pod="kube-system/kindnet-7c59v"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.721334    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a5b0f0c-0045-46ca-b1fb-3b810431b3c1-lib-modules\") pod \"kindnet-7c59v\" (UID: \"8a5b0f0c-0045-46ca-b1fb-3b810431b3c1\") " pod="kube-system/kindnet-7c59v"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.721354    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9jjk\" (UniqueName: \"kubernetes.io/projected/8a5b0f0c-0045-46ca-b1fb-3b810431b3c1-kube-api-access-s9jjk\") pod \"kindnet-7c59v\" (UID: \"8a5b0f0c-0045-46ca-b1fb-3b810431b3c1\") " pod="kube-system/kindnet-7c59v"
	Nov 24 14:06:14 no-preload-694102 kubelet[2096]: I1124 14:06:14.801400    2096 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 14:06:18 no-preload-694102 kubelet[2096]: I1124 14:06:18.303252    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zfqkk" podStartSLOduration=4.303236146 podStartE2EDuration="4.303236146s" podCreationTimestamp="2025-11-24 14:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:06:16.346965982 +0000 UTC m=+7.742762417" watchObservedRunningTime="2025-11-24 14:06:18.303236146 +0000 UTC m=+9.699032581"
	Nov 24 14:06:18 no-preload-694102 kubelet[2096]: I1124 14:06:18.310195    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7c59v" podStartSLOduration=1.695427033 podStartE2EDuration="4.310174369s" podCreationTimestamp="2025-11-24 14:06:14 +0000 UTC" firstStartedPulling="2025-11-24 14:06:15.416181948 +0000 UTC m=+6.811978374" lastFinishedPulling="2025-11-24 14:06:18.030929283 +0000 UTC m=+9.426725710" observedRunningTime="2025-11-24 14:06:18.302994822 +0000 UTC m=+9.698791257" watchObservedRunningTime="2025-11-24 14:06:18.310174369 +0000 UTC m=+9.705970895"
	Nov 24 14:06:28 no-preload-694102 kubelet[2096]: I1124 14:06:28.586931    2096 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 14:06:28 no-preload-694102 kubelet[2096]: I1124 14:06:28.755823    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkg4l\" (UniqueName: \"kubernetes.io/projected/67b42ac3-2efb-4c75-bf32-69997934054f-kube-api-access-fkg4l\") pod \"coredns-66bc5c9577-mlv2v\" (UID: \"67b42ac3-2efb-4c75-bf32-69997934054f\") " pod="kube-system/coredns-66bc5c9577-mlv2v"
	Nov 24 14:06:28 no-preload-694102 kubelet[2096]: I1124 14:06:28.755890    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h9pk\" (UniqueName: \"kubernetes.io/projected/c7650495-d936-47cd-8950-19783ba64e6c-kube-api-access-5h9pk\") pod \"storage-provisioner\" (UID: \"c7650495-d936-47cd-8950-19783ba64e6c\") " pod="kube-system/storage-provisioner"
	Nov 24 14:06:28 no-preload-694102 kubelet[2096]: I1124 14:06:28.755980    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c7650495-d936-47cd-8950-19783ba64e6c-tmp\") pod \"storage-provisioner\" (UID: \"c7650495-d936-47cd-8950-19783ba64e6c\") " pod="kube-system/storage-provisioner"
	Nov 24 14:06:28 no-preload-694102 kubelet[2096]: I1124 14:06:28.756006    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67b42ac3-2efb-4c75-bf32-69997934054f-config-volume\") pod \"coredns-66bc5c9577-mlv2v\" (UID: \"67b42ac3-2efb-4c75-bf32-69997934054f\") " pod="kube-system/coredns-66bc5c9577-mlv2v"
	Nov 24 14:06:30 no-preload-694102 kubelet[2096]: I1124 14:06:30.413781    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mlv2v" podStartSLOduration=16.413750336 podStartE2EDuration="16.413750336s" podCreationTimestamp="2025-11-24 14:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:06:30.388733818 +0000 UTC m=+21.784530253" watchObservedRunningTime="2025-11-24 14:06:30.413750336 +0000 UTC m=+21.809546771"
	Nov 24 14:06:32 no-preload-694102 kubelet[2096]: I1124 14:06:32.932232    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.932197244 podStartE2EDuration="16.932197244s" podCreationTimestamp="2025-11-24 14:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:06:30.460851962 +0000 UTC m=+21.856648397" watchObservedRunningTime="2025-11-24 14:06:32.932197244 +0000 UTC m=+24.327993679"
	Nov 24 14:06:32 no-preload-694102 kubelet[2096]: I1124 14:06:32.996269    2096 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649th\" (UniqueName: \"kubernetes.io/projected/d1d404ac-cc11-4eb6-ae07-b81ddad14d37-kube-api-access-649th\") pod \"busybox\" (UID: \"d1d404ac-cc11-4eb6-ae07-b81ddad14d37\") " pod="default/busybox"
	Nov 24 14:06:36 no-preload-694102 kubelet[2096]: I1124 14:06:36.408097    2096 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.396836088 podStartE2EDuration="4.408065603s" podCreationTimestamp="2025-11-24 14:06:32 +0000 UTC" firstStartedPulling="2025-11-24 14:06:33.44908946 +0000 UTC m=+24.844885887" lastFinishedPulling="2025-11-24 14:06:35.460318975 +0000 UTC m=+26.856115402" observedRunningTime="2025-11-24 14:06:36.408027687 +0000 UTC m=+27.803824113" watchObservedRunningTime="2025-11-24 14:06:36.408065603 +0000 UTC m=+27.803862054"
	
	
	==> storage-provisioner [d6e8512e69a330afbd862c90d0ccc7512efbe7a0737efdd4514e6d7148ba97ae] <==
	I1124 14:06:29.465864       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:06:29.469664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:29.483419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:06:29.484563       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:06:29.484814       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-694102_9f969bd9-a57c-4e56-a57b-86eb25bac79b!
	I1124 14:06:29.486442       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"db805b77-a3d6-4c9c-95e0-c4ae98bdc958", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-694102_9f969bd9-a57c-4e56-a57b-86eb25bac79b became leader
	W1124 14:06:29.521780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:29.529675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:06:29.585206       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-694102_9f969bd9-a57c-4e56-a57b-86eb25bac79b!
	W1124 14:06:31.533821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:31.543170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:33.546631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:33.553454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:35.556559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:35.562384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:37.567220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:37.574963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:39.579511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:39.587475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:41.591272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:41.597239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:43.601075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:43.606484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:45.621523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:06:45.635255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-694102 -n no-preload-694102
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-694102 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (15.39s)

                                                
                                    

Test pass (299/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 8.92
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.31
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 9.26
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.12
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 165.32
29 TestAddons/serial/Volcano 40.87
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.87
35 TestAddons/parallel/Registry 17.16
36 TestAddons/parallel/RegistryCreds 0.88
37 TestAddons/parallel/Ingress 20.18
38 TestAddons/parallel/InspektorGadget 11.82
39 TestAddons/parallel/MetricsServer 6.97
41 TestAddons/parallel/CSI 44.49
42 TestAddons/parallel/Headlamp 17.24
43 TestAddons/parallel/CloudSpanner 6.6
44 TestAddons/parallel/LocalPath 51.04
45 TestAddons/parallel/NvidiaDevicePlugin 6.52
46 TestAddons/parallel/Yakd 11.81
48 TestAddons/StoppedEnableDisable 12.37
49 TestCertOptions 36.78
50 TestCertExpiration 235.85
52 TestForceSystemdFlag 44.58
53 TestForceSystemdEnv 47.43
54 TestDockerEnvContainerd 48.45
58 TestErrorSpam/setup 33.44
59 TestErrorSpam/start 1.01
60 TestErrorSpam/status 1.34
61 TestErrorSpam/pause 1.83
62 TestErrorSpam/unpause 1.81
63 TestErrorSpam/stop 1.59
66 TestFunctional/serial/CopySyncFile 0.01
67 TestFunctional/serial/StartWithProxy 82.22
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 7.07
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.12
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.4
75 TestFunctional/serial/CacheCmd/cache/add_local 1.17
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.92
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 44.3
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.49
86 TestFunctional/serial/LogsFileCmd 1.51
87 TestFunctional/serial/InvalidService 4.7
89 TestFunctional/parallel/ConfigCmd 0.46
90 TestFunctional/parallel/DashboardCmd 6.85
91 TestFunctional/parallel/DryRun 0.46
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1.43
97 TestFunctional/parallel/ServiceCmdConnect 8.78
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 26.43
101 TestFunctional/parallel/SSHCmd 0.73
102 TestFunctional/parallel/CpCmd 2.35
104 TestFunctional/parallel/FileSync 0.36
105 TestFunctional/parallel/CertSync 2.23
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
113 TestFunctional/parallel/License 0.31
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.77
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.42
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.27
126 TestFunctional/parallel/ServiceCmd/List 0.53
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.62
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
130 TestFunctional/parallel/ProfileCmd/profile_list 0.57
131 TestFunctional/parallel/ServiceCmd/Format 0.51
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.54
133 TestFunctional/parallel/ServiceCmd/URL 0.59
134 TestFunctional/parallel/MountCmd/any-port 8.07
135 TestFunctional/parallel/MountCmd/specific-port 2.38
136 TestFunctional/parallel/Version/short 0.07
137 TestFunctional/parallel/Version/components 1.42
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.72
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
142 TestFunctional/parallel/ImageCommands/ImageBuild 4.43
143 TestFunctional/parallel/ImageCommands/Setup 0.74
144 TestFunctional/parallel/MountCmd/VerifyCleanup 2.71
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.32
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.26
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.49
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.48
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.46
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 183.07
163 TestMultiControlPlane/serial/DeployApp 6.94
164 TestMultiControlPlane/serial/PingHostFromPods 1.69
165 TestMultiControlPlane/serial/AddWorkerNode 61.27
166 TestMultiControlPlane/serial/NodeLabels 0.13
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
168 TestMultiControlPlane/serial/CopyFile 20.49
169 TestMultiControlPlane/serial/StopSecondaryNode 13
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
171 TestMultiControlPlane/serial/RestartSecondaryNode 13.08
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.5
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 99.05
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.84
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
176 TestMultiControlPlane/serial/StopCluster 36.42
177 TestMultiControlPlane/serial/RestartCluster 60.49
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.88
179 TestMultiControlPlane/serial/AddSecondaryNode 82.42
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
185 TestJSONOutput/start/Command 53.51
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.73
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.62
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 1.47
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 48.14
211 TestKicCustomNetwork/use_default_bridge_network 36.22
212 TestKicExistingNetwork 35.4
213 TestKicCustomSubnet 36.32
214 TestKicStaticIP 37.39
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 67.4
219 TestMountStart/serial/StartWithMountFirst 9.01
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 6.2
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.73
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.28
226 TestMountStart/serial/RestartStopped 8.1
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 108.79
231 TestMultiNode/serial/DeployApp2Nodes 5.4
232 TestMultiNode/serial/PingHostFrom2Pods 0.99
233 TestMultiNode/serial/AddNode 28.19
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.71
236 TestMultiNode/serial/CopyFile 10.59
237 TestMultiNode/serial/StopNode 2.45
238 TestMultiNode/serial/StartAfterStop 8.02
239 TestMultiNode/serial/RestartKeepsNodes 73.13
240 TestMultiNode/serial/DeleteNode 5.74
241 TestMultiNode/serial/StopMultiNode 24.14
242 TestMultiNode/serial/RestartMultiNode 49.41
243 TestMultiNode/serial/ValidateNameConflict 36.59
248 TestPreload 120.76
250 TestScheduledStopUnix 112.02
253 TestInsufficientStorage 13.04
254 TestRunningBinaryUpgrade 70.06
256 TestKubernetesUpgrade 361.88
257 TestMissingContainerUpgrade 141.02
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 39.18
261 TestNoKubernetes/serial/StartWithStopK8s 17.6
262 TestNoKubernetes/serial/Start 10.05
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.43
265 TestNoKubernetes/serial/ProfileList 1.71
266 TestNoKubernetes/serial/Stop 1.46
267 TestNoKubernetes/serial/StartNoArgs 6.56
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
269 TestStoppedBinaryUpgrade/Setup 2.99
270 TestStoppedBinaryUpgrade/Upgrade 55.65
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.42
280 TestPause/serial/Start 82.13
281 TestPause/serial/SecondStartNoReconfiguration 6.55
282 TestPause/serial/Pause 0.72
283 TestPause/serial/VerifyStatus 0.41
284 TestPause/serial/Unpause 0.83
285 TestPause/serial/PauseAgain 0.84
286 TestPause/serial/DeletePaused 2.85
287 TestPause/serial/VerifyDeletedResources 0.39
295 TestNetworkPlugins/group/false 5.69
300 TestStartStop/group/old-k8s-version/serial/FirstStart 59.88
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.21
303 TestStartStop/group/old-k8s-version/serial/Stop 12.14
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
305 TestStartStop/group/old-k8s-version/serial/SecondStart 49.08
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.1
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
309 TestStartStop/group/old-k8s-version/serial/Pause 3.92
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.55
313 TestStartStop/group/embed-certs/serial/FirstStart 86.57
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.19
317 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.2
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
319 TestStartStop/group/embed-certs/serial/Stop 12.58
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.3
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.39
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
323 TestStartStop/group/embed-certs/serial/SecondStart 52.56
324 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
326 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
327 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.35
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
330 TestStartStop/group/no-preload/serial/FirstStart 73.82
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
333 TestStartStop/group/embed-certs/serial/Pause 4.12
335 TestStartStop/group/newest-cni/serial/FirstStart 47.72
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
338 TestStartStop/group/newest-cni/serial/Stop 1.37
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
340 TestStartStop/group/newest-cni/serial/SecondStart 16.41
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
345 TestStartStop/group/newest-cni/serial/Pause 3.34
346 TestNetworkPlugins/group/auto/Start 90.15
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.42
348 TestStartStop/group/no-preload/serial/Stop 12.35
349 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
350 TestStartStop/group/no-preload/serial/SecondStart 56.39
351 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
352 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
353 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
354 TestStartStop/group/no-preload/serial/Pause 3.35
355 TestNetworkPlugins/group/auto/KubeletFlags 0.38
356 TestNetworkPlugins/group/auto/NetCatPod 10.4
357 TestNetworkPlugins/group/kindnet/Start 90.39
358 TestNetworkPlugins/group/auto/DNS 0.24
359 TestNetworkPlugins/group/auto/Localhost 0.18
360 TestNetworkPlugins/group/auto/HairPin 0.19
361 TestNetworkPlugins/group/flannel/Start 61.63
362 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
363 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
364 TestNetworkPlugins/group/kindnet/NetCatPod 10.3
365 TestNetworkPlugins/group/flannel/ControllerPod 6.01
366 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
367 TestNetworkPlugins/group/flannel/NetCatPod 10.27
368 TestNetworkPlugins/group/kindnet/DNS 0.22
369 TestNetworkPlugins/group/kindnet/Localhost 0.25
370 TestNetworkPlugins/group/kindnet/HairPin 0.24
371 TestNetworkPlugins/group/flannel/DNS 0.23
372 TestNetworkPlugins/group/flannel/Localhost 0.23
373 TestNetworkPlugins/group/flannel/HairPin 0.29
374 TestNetworkPlugins/group/enable-default-cni/Start 56.89
375 TestNetworkPlugins/group/bridge/Start 51.04
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.28
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
379 TestNetworkPlugins/group/bridge/NetCatPod 9.31
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
383 TestNetworkPlugins/group/bridge/DNS 0.21
384 TestNetworkPlugins/group/bridge/Localhost 0.2
385 TestNetworkPlugins/group/bridge/HairPin 0.24
386 TestNetworkPlugins/group/calico/Start 79.33
387 TestNetworkPlugins/group/custom-flannel/Start 68.1
388 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
389 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.31
390 TestNetworkPlugins/group/calico/ControllerPod 6.01
391 TestNetworkPlugins/group/calico/KubeletFlags 0.33
392 TestNetworkPlugins/group/calico/NetCatPod 9.41
393 TestNetworkPlugins/group/custom-flannel/DNS 0.36
394 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
395 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
396 TestNetworkPlugins/group/calico/DNS 0.29
397 TestNetworkPlugins/group/calico/Localhost 0.19
398 TestNetworkPlugins/group/calico/HairPin 0.24
x
+
TestDownloadOnly/v1.28.0/json-events (8.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-829337 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-829337 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.918444833s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (8.92s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1124 13:14:05.159904    4178 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1124 13:14:05.160243    4178 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
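The preload-exists step only verifies that the cached tarball is already on disk at the path logged above; a minimal manual check of the same state, assuming the MINIKUBE_HOME used by this run, would be:

  # list the cached preload tarballs for this run's MINIKUBE_HOME (hypothetical manual check, not part of the suite)
  ls -lh /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/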

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-829337
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-829337: exit status 85 (73.051059ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-829337 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-829337 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:13:56
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:13:56.285159    4183 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:13:56.285398    4183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:13:56.285427    4183 out.go:374] Setting ErrFile to fd 2...
	I1124 13:13:56.285446    4183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:13:56.285741    4183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	W1124 13:13:56.285936    4183 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21932-2368/.minikube/config/config.json: open /home/jenkins/minikube-integration/21932-2368/.minikube/config/config.json: no such file or directory
	I1124 13:13:56.286404    4183 out.go:368] Setting JSON to true
	I1124 13:13:56.287209    4183 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3385,"bootTime":1763986651,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 13:13:56.287298    4183 start.go:143] virtualization:  
	I1124 13:13:56.292808    4183 out.go:99] [download-only-829337] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1124 13:13:56.292969    4183 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball: no such file or directory
	I1124 13:13:56.293022    4183 notify.go:221] Checking for updates...
	I1124 13:13:56.296119    4183 out.go:171] MINIKUBE_LOCATION=21932
	I1124 13:13:56.299161    4183 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:13:56.302420    4183 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 13:13:56.305610    4183 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	I1124 13:13:56.308646    4183 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1124 13:13:56.314536    4183 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 13:13:56.314812    4183 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:13:56.346121    4183 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 13:13:56.346274    4183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:13:56.753700    4183 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-24 13:13:56.744249357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:13:56.753818    4183 docker.go:319] overlay module found
	I1124 13:13:56.756993    4183 out.go:99] Using the docker driver based on user configuration
	I1124 13:13:56.757021    4183 start.go:309] selected driver: docker
	I1124 13:13:56.757027    4183 start.go:927] validating driver "docker" against <nil>
	I1124 13:13:56.757127    4183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:13:56.816418    4183 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-24 13:13:56.807729303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:13:56.816573    4183 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:13:56.816861    4183 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1124 13:13:56.817019    4183 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 13:13:56.820030    4183 out.go:171] Using Docker driver with root privileges
	I1124 13:13:56.822874    4183 cni.go:84] Creating CNI manager for ""
	I1124 13:13:56.822939    4183 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:13:56.822952    4183 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:13:56.823029    4183 start.go:353] cluster config:
	{Name:download-only-829337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-829337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:13:56.825981    4183 out.go:99] Starting "download-only-829337" primary control-plane node in "download-only-829337" cluster
	I1124 13:13:56.826001    4183 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 13:13:56.828909    4183 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:13:56.828955    4183 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:13:56.829035    4183 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:13:56.845138    4183 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 13:13:56.845310    4183 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 13:13:56.845424    4183 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 13:13:56.882044    4183 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1124 13:13:56.882068    4183 cache.go:65] Caching tarball of preloaded images
	I1124 13:13:56.882234    4183 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 13:13:56.885634    4183 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1124 13:13:56.885662    4183 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1124 13:13:56.976318    4183 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1124 13:13:56.976455    4183 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-829337 host does not exist
	  To start a cluster, run: "minikube start -p download-only-829337"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
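The start log above records both the preload URL and the MD5 returned by the GCS API; a hedged sketch of reproducing that download and verifying the checksum outside the test harness (filenames and checksum copied from the log) is:

  # fetch the same preload tarball and compare against the checksum the log reports (38d7f581f2fa4226c8af2c9106b982b7)
  curl -fLo preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 \
    https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
  md5sum preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4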

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.31s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-829337
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (9.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-681152 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-681152 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.258818638s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (9.26s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1124 13:14:14.956846    4178 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1124 13:14:14.956880    4178 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-681152
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-681152: exit status 85 (121.111485ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-829337 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-829337 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ delete  │ -p download-only-829337                                                                                                                                                               │ download-only-829337 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ start   │ -o=json --download-only -p download-only-681152 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-681152 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:14:05
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:14:05.740538    4385 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:14:05.740706    4385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:14:05.740736    4385 out.go:374] Setting ErrFile to fd 2...
	I1124 13:14:05.740756    4385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:14:05.741029    4385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 13:14:05.741448    4385 out.go:368] Setting JSON to true
	I1124 13:14:05.742205    4385 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3395,"bootTime":1763986651,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 13:14:05.742297    4385 start.go:143] virtualization:  
	I1124 13:14:05.745648    4385 out.go:99] [download-only-681152] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 13:14:05.745937    4385 notify.go:221] Checking for updates...
	I1124 13:14:05.748876    4385 out.go:171] MINIKUBE_LOCATION=21932
	I1124 13:14:05.751990    4385 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:14:05.754941    4385 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 13:14:05.757883    4385 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	I1124 13:14:05.760780    4385 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1124 13:14:05.766373    4385 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 13:14:05.766627    4385 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:14:05.801094    4385 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 13:14:05.801211    4385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:14:05.862266    4385 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-24 13:14:05.853172696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:14:05.862370    4385 docker.go:319] overlay module found
	I1124 13:14:05.865377    4385 out.go:99] Using the docker driver based on user configuration
	I1124 13:14:05.865420    4385 start.go:309] selected driver: docker
	I1124 13:14:05.865427    4385 start.go:927] validating driver "docker" against <nil>
	I1124 13:14:05.865531    4385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:14:05.918269    4385 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-24 13:14:05.909516994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:14:05.918454    4385 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:14:05.918739    4385 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1124 13:14:05.918885    4385 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 13:14:05.921988    4385 out.go:171] Using Docker driver with root privileges
	I1124 13:14:05.924731    4385 cni.go:84] Creating CNI manager for ""
	I1124 13:14:05.924803    4385 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 13:14:05.924818    4385 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:14:05.924890    4385 start.go:353] cluster config:
	{Name:download-only-681152 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-681152 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:14:05.928114    4385 out.go:99] Starting "download-only-681152" primary control-plane node in "download-only-681152" cluster
	I1124 13:14:05.928137    4385 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 13:14:05.931085    4385 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:14:05.931133    4385 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:14:05.931319    4385 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:14:05.946969    4385 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 13:14:05.947114    4385 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 13:14:05.947139    4385 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1124 13:14:05.947144    4385 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1124 13:14:05.947152    4385 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1124 13:14:06.001966    4385 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1124 13:14:06.001995    4385 cache.go:65] Caching tarball of preloaded images
	I1124 13:14:06.002187    4385 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 13:14:06.008374    4385 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1124 13:14:06.008414    4385 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1124 13:14:06.097000    4385 preload.go:295] Got checksum from GCS API "435977642a202d20ca04f26d87d875a8"
	I1124 13:14:06.097049    4385 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:435977642a202d20ca04f26d87d875a8 -> /home/jenkins/minikube-integration/21932-2368/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-681152 host does not exist
	  To start a cluster, run: "minikube start -p download-only-681152"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-681152
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I1124 13:14:16.132288    4178 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-750885 --alsologtostderr --binary-mirror http://127.0.0.1:41403 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-750885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-750885
--- PASS: TestBinaryMirror (0.63s)
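This test points minikube at a local HTTP endpoint (http://127.0.0.1:41403 here) instead of dl.k8s.io. A minimal sketch of the same idea, assuming a directory already laid out with the kubectl/kubeadm/kubelet release paths and a hypothetical profile name, could look like:

  # serve a pre-populated mirror directory locally, then point a download-only start at it (profile name is illustrative)
  python3 -m http.server 41403 --directory ./mirror &
  out/minikube-linux-arm64 start --download-only -p binary-mirror-test \
    --binary-mirror http://127.0.0.1:41403 --driver=docker --container-runtime=containerd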

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-384875
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-384875: exit status 85 (77.074653ms)

                                                
                                                
-- stdout --
	* Profile "addons-384875" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-384875"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-384875
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-384875: exit status 85 (71.545777ms)

                                                
                                                
-- stdout --
	* Profile "addons-384875" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-384875"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
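Both pre-setup checks rely on minikube exiting with status 85 when the target profile does not exist; the same behaviour can be observed by hand (profile name hypothetical):

  out/minikube-linux-arm64 addons enable dashboard -p no-such-profile
  echo $?   # prints 85 when the profile is missing, as asserted above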

                                                
                                    
x
+
TestAddons/Setup (165.32s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-384875 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-384875 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m45.318472977s)
--- PASS: TestAddons/Setup (165.32s)
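Setup enables a long list of addons in a single start invocation; on an already-running profile the same addons can also be toggled one at a time, e.g. (sketch, reusing the profile from this run):

  out/minikube-linux-arm64 -p addons-384875 addons enable metrics-server
  out/minikube-linux-arm64 -p addons-384875 addons list   # shows per-addon enabled/disabled state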

                                                
                                    
x
+
TestAddons/serial/Volcano (40.87s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 67.074532ms
addons_test.go:876: volcano-admission stabilized in 67.493398ms
addons_test.go:868: volcano-scheduler stabilized in 67.182274ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-pwlfl" [81783634-d034-48bb-96f4-0e643c9dac56] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.00403681s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-zg8db" [3a7a3966-5923-4774-bb18-8749ba955dde] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00522823s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-5hr5c" [92af2d2b-b304-4d5a-88f0-e2457ba59682] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004306998s
addons_test.go:903: (dbg) Run:  kubectl --context addons-384875 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-384875 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-384875 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [3d8786c2-c1f7-4af9-8d39-16a3e530bd21] Pending
helpers_test.go:352: "test-job-nginx-0" [3d8786c2-c1f7-4af9-8d39-16a3e530bd21] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [3d8786c2-c1f7-4af9-8d39-16a3e530bd21] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003477451s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-384875 addons disable volcano --alsologtostderr -v=1: (12.102854506s)
--- PASS: TestAddons/serial/Volcano (40.87s)
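The Volcano check ultimately waits on the volcano.sh/job-name=test-job pod label in the my-volcano namespace; the equivalent manual wait, assuming the same kubectl context, is roughly:

  kubectl --context addons-384875 -n my-volcano get pods -l volcano.sh/job-name=test-job
  kubectl --context addons-384875 -n my-volcano wait --for=condition=ready pod -l volcano.sh/job-name=test-job --timeout=3m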

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-384875 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-384875 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.87s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-384875 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-384875 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [16bf3462-a864-4ca0-a0bc-18d39004cfe2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [16bf3462-a864-4ca0-a0bc-18d39004cfe2] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003355057s
addons_test.go:694: (dbg) Run:  kubectl --context addons-384875 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-384875 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-384875 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-384875 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.87s)
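The fake-credentials assertions condense to checking what the gcp-auth addon injects into a freshly created pod; against the same busybox pod that amounts to roughly:

  kubectl --context addons-384875 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT
  kubectl --context addons-384875 exec busybox -- cat /google-app-creds.json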

                                                
                                    
x
+
TestAddons/parallel/Registry (17.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 11.482596ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-hq85f" [0675173e-9090-42b1-a569-20916424bc52] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010471765s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-czswv" [9db591f9-b39c-450d-9c8e-4a7c7e4eb06f] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003297136s
addons_test.go:392: (dbg) Run:  kubectl --context addons-384875 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-384875 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-384875 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.056393916s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 ip
2025/11/24 13:18:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.16s)
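The registry check combines an in-cluster DNS/HTTP probe of the registry Service with a host-side GET of the node IP on port 5000; run by hand against this profile, the same pair of probes looks roughly like:

  # in-cluster: resolve and probe the registry Service using the same busybox image the test uses
  kubectl --context addons-384875 run --rm -it registry-probe --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -- wget --spider -S http://registry.kube-system.svc.cluster.local
  # host-side: the test's follow-up GET hits port 5000 on the node IP reported by 'minikube ip'
  curl -s http://$(out/minikube-linux-arm64 -p addons-384875 ip):5000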

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.88s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 7.750949ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-384875
addons_test.go:332: (dbg) Run:  kubectl --context addons-384875 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.88s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (20.18s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-384875 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-384875 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-384875 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [b7b03f56-a9f3-4537-b9d0-a567b92d4dfe] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [b7b03f56-a9f3-4537-b9d0-a567b92d4dfe] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003620532s
I1124 13:18:47.937543    4178 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-384875 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-384875 addons disable ingress-dns --alsologtostderr -v=1: (1.461957949s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-384875 addons disable ingress --alsologtostderr -v=1: (7.909432366s)
--- PASS: TestAddons/parallel/Ingress (20.18s)
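The ingress check exercises two paths: an HTTP request from inside the node with a Host header matching the ingress rule, and host-side name resolution through ingress-dns; reproduced by hand (hostnames taken from the test data referenced above):

  out/minikube-linux-arm64 -p addons-384875 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  nslookup hello-john.test $(out/minikube-linux-arm64 -p addons-384875 ip)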

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.82s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-hzchf" [4df0b56d-26a4-42fe-a7e7-45c6e88629b6] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003993151s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-384875 addons disable inspektor-gadget --alsologtostderr -v=1: (5.819687658s)
--- PASS: TestAddons/parallel/InspektorGadget (11.82s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.97s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.501983ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-pgrhq" [c91ff95a-db42-41de-a98c-f79d23290ad9] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004067032s
addons_test.go:463: (dbg) Run:  kubectl --context addons-384875 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.97s)

                                                
                                    
x
+
TestAddons/parallel/CSI (44.49s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1124 13:18:19.397679    4178 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1124 13:18:19.400576    4178 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1124 13:18:19.400600    4178 kapi.go:107] duration metric: took 7.068189ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.079692ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-384875 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-384875 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [f6d89d20-14e5-4278-998b-ae80fdf555f7] Pending
helpers_test.go:352: "task-pv-pod" [f6d89d20-14e5-4278-998b-ae80fdf555f7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [f6d89d20-14e5-4278-998b-ae80fdf555f7] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00383365s
addons_test.go:572: (dbg) Run:  kubectl --context addons-384875 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-384875 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-384875 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-384875 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-384875 delete pod task-pv-pod: (1.239530733s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-384875 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-384875 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-384875 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [5afc47ec-2d2e-4d6e-af38-e0bab8f29bd2] Pending
helpers_test.go:352: "task-pv-pod-restore" [5afc47ec-2d2e-4d6e-af38-e0bab8f29bd2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [5afc47ec-2d2e-4d6e-af38-e0bab8f29bd2] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005105424s
addons_test.go:614: (dbg) Run:  kubectl --context addons-384875 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-384875 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-384875 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-384875 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.015377487s)
--- PASS: TestAddons/parallel/CSI (44.49s)
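Note: the pass above exercises one full hostpath-CSI cycle: bind a PVC, mount it in a pod, snapshot it, delete the original pod and claim, then restore both from the snapshot. A rough by-hand replay of the same kubectl sequence, assuming the testdata manifests shipped with the test sources are available locally (their contents are not reproduced in this report):

	# provision and consume a claim backed by the csi-hostpath driver
	kubectl --context addons-384875 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-384875 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-384875 get pvc hpvc -o jsonpath='{.status.phase}'   # expect Bound
	# snapshot, drop the original pod/claim, then restore both from the snapshot
	kubectl --context addons-384875 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-384875 delete pod task-pv-pod
	kubectl --context addons-384875 delete pvc hpvc
	kubectl --context addons-384875 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-384875 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml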

                                                
                                    
TestAddons/parallel/Headlamp (17.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-384875 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-384875 --alsologtostderr -v=1: (1.060735929s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-ffl26" [96adaeb4-1928-41c8-9f7a-d0120a310af8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-ffl26" [96adaeb4-1928-41c8-9f7a-d0120a310af8] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003530765s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-384875 addons disable headlamp --alsologtostderr -v=1: (6.17290187s)
--- PASS: TestAddons/parallel/Headlamp (17.24s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.6s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-n8vtb" [c05e4d7e-6339-4a44-b308-8d0c68150d8d] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.002779723s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.60s)

                                                
                                    
TestAddons/parallel/LocalPath (51.04s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-384875 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-384875 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-384875 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [d1b15720-0e63-48ff-b945-404c140d9fa8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [d1b15720-0e63-48ff-b945-404c140d9fa8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [d1b15720-0e63-48ff-b945-404c140d9fa8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002722556s
addons_test.go:967: (dbg) Run:  kubectl --context addons-384875 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 ssh "cat /opt/local-path-provisioner/pvc-75b139ef-3d9f-4efc-bd9a-ff61e047f566_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-384875 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-384875 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-384875 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.923509866s)
--- PASS: TestAddons/parallel/LocalPath (51.04s)
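Note: for the local-path check, the on-host file read via `ssh "cat ..."` lives under a directory named after the bound PV, so the path differs on every run. A sketch of how that path can be derived when replaying this manually (the jsonpath field and the <pv>_<namespace>_<pvc> layout match what this run shows):

	PV=$(kubectl --context addons-384875 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
	out/minikube-linux-arm64 -p addons-384875 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"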

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-6mmpc" [61549f0a-b759-448c-b905-0262e7965aa2] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003842854s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                    
TestAddons/parallel/Yakd (11.81s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-46hsf" [0172098f-9839-40b5-b6b7-ca5b089e78cd] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003231767s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-384875 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-384875 addons disable yakd --alsologtostderr -v=1: (5.802170012s)
--- PASS: TestAddons/parallel/Yakd (11.81s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.37s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-384875
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-384875: (12.090882053s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-384875
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-384875
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-384875
--- PASS: TestAddons/StoppedEnableDisable (12.37s)

                                                
                                    
TestCertOptions (36.78s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-440754 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-440754 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.338861579s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-440754 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-440754 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-440754 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-440754" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-440754
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-440754: (2.666388317s)
--- PASS: TestCertOptions (36.78s)
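Note: what the openssl and kubeconfig checks above assert, roughly: the extra --apiserver-ips/--apiserver-names values must appear as SANs in the generated apiserver certificate, and the kubeconfig must point at the non-default port 8555. A by-hand version of the same check (standard openssl/kubectl invocations; the grep pattern is only illustrative):

	out/minikube-linux-arm64 -p cert-options-440754 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
	kubectl --context cert-options-440754 config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # expect ...:8555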

                                                
                                    
TestCertExpiration (235.85s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-865605 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-865605 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (42.297605884s)
E1124 13:59:13.430945    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-865605 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-865605 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.44854371s)
helpers_test.go:175: Cleaning up "cert-expiration-865605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-865605
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-865605: (5.100612479s)
--- PASS: TestCertExpiration (235.85s)

                                                
                                    
TestForceSystemdFlag (44.58s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-148052 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-148052 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.678506225s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-148052 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-148052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-148052
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-148052: (3.52280335s)
--- PASS: TestForceSystemdFlag (44.58s)

                                                
                                    
TestForceSystemdEnv (47.43s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-134839 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-134839 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (44.114823855s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-134839 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-134839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-134839
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-134839: (2.910147633s)
--- PASS: TestForceSystemdEnv (47.43s)

                                                
                                    
TestDockerEnvContainerd (48.45s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-220634 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-220634 --driver=docker  --container-runtime=containerd: (32.377728157s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-220634"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-220634": (1.092527147s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-AU017Aa6fbKr/agent.23434" SSH_AGENT_PID="23435" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-AU017Aa6fbKr/agent.23434" SSH_AGENT_PID="23435" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-AU017Aa6fbKr/agent.23434" SSH_AGENT_PID="23435" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.235281622s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-AU017Aa6fbKr/agent.23434" SSH_AGENT_PID="23435" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-220634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-220634
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-220634: (2.260214588s)
--- PASS: TestDockerEnvContainerd (48.45s)
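Note: outside the harness, the docker-env output captured above is normally consumed with eval rather than by copying the exported variables by hand, e.g. (same flags as the test; profile name and image tag taken from this run):

	eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-220634)"
	docker version        # now talks to the docker endpoint inside the minikube node over SSH
	DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
	docker image ls       # the freshly built tag should be listed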

                                                
                                    
TestErrorSpam/setup (33.44s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-014750 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-014750 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-014750 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-014750 --driver=docker  --container-runtime=containerd: (33.441712874s)
--- PASS: TestErrorSpam/setup (33.44s)

                                                
                                    
TestErrorSpam/start (1.01s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-014750 --log_dir /tmp/nospam-014750 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-014750 --log_dir /tmp/nospam-014750 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-014750 --log_dir /tmp/nospam-014750 start --dry-run
--- PASS: TestErrorSpam/start (1.01s)

                                                
                                    
TestErrorSpam/status (1.34s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-014750 --log_dir /tmp/nospam-014750 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-014750 --log_dir /tmp/nospam-014750 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-014750 --log_dir /tmp/nospam-014750 status
--- PASS: TestErrorSpam/status (1.34s)

                                                
                                    
TestErrorSpam/pause (1.83s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-014750 --log_dir /tmp/nospam-014750 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-014750 --log_dir /tmp/nospam-014750 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-014750 --log_dir /tmp/nospam-014750 pause
--- PASS: TestErrorSpam/pause (1.83s)

                                                
                                    
TestErrorSpam/unpause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-014750 --log_dir /tmp/nospam-014750 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-014750 --log_dir /tmp/nospam-014750 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-014750 --log_dir /tmp/nospam-014750 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
TestErrorSpam/stop (1.59s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-014750 --log_dir /tmp/nospam-014750 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-014750 --log_dir /tmp/nospam-014750 stop: (1.396986862s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-014750 --log_dir /tmp/nospam-014750 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-014750 --log_dir /tmp/nospam-014750 stop
--- PASS: TestErrorSpam/stop (1.59s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21932-2368/.minikube/files/etc/test/nested/copy/4178/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                    
TestFunctional/serial/StartWithProxy (82.22s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-659953 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1124 13:22:02.200552    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:02.207411    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:02.218797    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:02.240195    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:02.281556    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:02.362967    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:02.524484    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:02.846146    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:03.488165    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:04.769441    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:07.330902    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:12.452298    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:22.693609    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:43.175020    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-659953 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m22.222262253s)
--- PASS: TestFunctional/serial/StartWithProxy (82.22s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.07s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1124 13:23:04.790367    4178 config.go:182] Loaded profile config "functional-659953": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-659953 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-659953 --alsologtostderr -v=8: (7.067180496s)
functional_test.go:678: soft start took 7.069432726s for "functional-659953" cluster.
I1124 13:23:11.857862    4178 config.go:182] Loaded profile config "functional-659953": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (7.07s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-659953 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.4s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-659953 cache add registry.k8s.io/pause:3.1: (1.284801851s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-659953 cache add registry.k8s.io/pause:3.3: (1.065674795s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-659953 cache add registry.k8s.io/pause:latest: (1.048323143s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.40s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-659953 /tmp/TestFunctionalserialCacheCmdcacheadd_local890016212/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 cache add minikube-local-cache-test:functional-659953
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 cache delete minikube-local-cache-test:functional-659953
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-659953
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659953 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (281.150348ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-659953 cache reload: (1.007046241s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)
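Note: the sequence above is the intended recovery path when an image previously added with "cache add" disappears from the node's runtime: "cache reload" loads the images kept in minikube's local cache back into the node. Condensed, using the commands exactly as the test ran them:

	out/minikube-linux-arm64 -p functional-659953 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-659953 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image is gone
	out/minikube-linux-arm64 -p functional-659953 cache reload
	out/minikube-linux-arm64 -p functional-659953 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again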

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 kubectl -- --context functional-659953 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-659953 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (44.3s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-659953 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1124 13:23:24.136488    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-659953 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.304349512s)
functional_test.go:776: restart took 44.304451207s for "functional-659953" cluster.
I1124 13:24:03.655640    4178 config.go:182] Loaded profile config "functional-659953": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (44.30s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-659953 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-659953 logs: (1.487783821s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 logs --file /tmp/TestFunctionalserialLogsFileCmd4172663627/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-659953 logs --file /tmp/TestFunctionalserialLogsFileCmd4172663627/001/logs.txt: (1.509535224s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (4.7s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-659953 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-659953
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-659953: exit status 115 (462.85462ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30207 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-659953 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.70s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659953 config get cpus: exit status 14 (80.323862ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659953 config get cpus: exit status 14 (68.691599ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
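Note: the exit status 14 seen twice above is the expected result of "config get" on a key that is not set; the test toggles the key both ways to prove it. Condensed replay with the same commands:

	out/minikube-linux-arm64 -p functional-659953 config set cpus 2
	out/minikube-linux-arm64 -p functional-659953 config get cpus      # prints the stored value
	out/minikube-linux-arm64 -p functional-659953 config unset cpus
	out/minikube-linux-arm64 -p functional-659953 config get cpus      # exit status 14: key not found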

                                                
                                    
TestFunctional/parallel/DashboardCmd (6.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-659953 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-659953 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 38832: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.85s)

                                                
                                    
TestFunctional/parallel/DryRun (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-659953 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-659953 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (196.166684ms)

                                                
                                                
-- stdout --
	* [functional-659953] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:24:43.766303   38514 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:24:43.766465   38514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:24:43.766496   38514 out.go:374] Setting ErrFile to fd 2...
	I1124 13:24:43.766517   38514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:24:43.766785   38514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 13:24:43.767198   38514 out.go:368] Setting JSON to false
	I1124 13:24:43.768170   38514 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4033,"bootTime":1763986651,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 13:24:43.768274   38514 start.go:143] virtualization:  
	I1124 13:24:43.769756   38514 out.go:179] * [functional-659953] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 13:24:43.772562   38514 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:24:43.772675   38514 notify.go:221] Checking for updates...
	I1124 13:24:43.776352   38514 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:24:43.778076   38514 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 13:24:43.779160   38514 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	I1124 13:24:43.780298   38514 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 13:24:43.782295   38514 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:24:43.784761   38514 config.go:182] Loaded profile config "functional-659953": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:24:43.785381   38514 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:24:43.822632   38514 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 13:24:43.822755   38514 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:24:43.900889   38514 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 13:24:43.885108376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:24:43.901002   38514 docker.go:319] overlay module found
	I1124 13:24:43.902589   38514 out.go:179] * Using the docker driver based on existing profile
	I1124 13:24:43.903718   38514 start.go:309] selected driver: docker
	I1124 13:24:43.903735   38514 start.go:927] validating driver "docker" against &{Name:functional-659953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-659953 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:24:43.903836   38514 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:24:43.905563   38514 out.go:203] 
	W1124 13:24:43.906660   38514 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 13:24:43.907734   38514 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-659953 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.46s)
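The dry-run above trips minikube's RSRC_INSUFFICIENT_REQ_MEMORY guard: the requested 250MiB is below the reported usable minimum of 1800MB. A minimal, hedged sketch of that kind of threshold check follows (the constant name and exit path are illustrative only, not minikube's actual implementation; the localized run further down reports exit status 23 for the same scenario):

package main

import (
	"fmt"
	"os"
)

const minUsableMB = 1800 // minimum quoted in the error message above

func main() {
	requestedMB := 250 // e.g. what --memory 250MB resolves to
	if requestedMB < minUsableMB {
		fmt.Fprintf(os.Stderr,
			"X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB\n",
			requestedMB, minUsableMB)
		os.Exit(23) // non-zero exit; 23 is the status reported for this scenario in the report
	}
}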

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-659953 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-659953 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (180.821287ms)

                                                
                                                
-- stdout --
	* [functional-659953] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:24:43.589967   38469 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:24:43.590094   38469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:24:43.590105   38469 out.go:374] Setting ErrFile to fd 2...
	I1124 13:24:43.590109   38469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:24:43.591039   38469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 13:24:43.591419   38469 out.go:368] Setting JSON to false
	I1124 13:24:43.592363   38469 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4033,"bootTime":1763986651,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 13:24:43.592432   38469 start.go:143] virtualization:  
	I1124 13:24:43.593844   38469 out.go:179] * [functional-659953] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1124 13:24:43.595188   38469 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:24:43.595279   38469 notify.go:221] Checking for updates...
	I1124 13:24:43.597468   38469 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:24:43.598740   38469 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 13:24:43.599901   38469 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	I1124 13:24:43.601164   38469 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 13:24:43.602334   38469 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:24:43.604042   38469 config.go:182] Loaded profile config "functional-659953": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:24:43.604654   38469 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:24:43.637051   38469 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 13:24:43.637157   38469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:24:43.704602   38469 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 13:24:43.695449458 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:24:43.704712   38469 docker.go:319] overlay module found
	I1124 13:24:43.706210   38469 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1124 13:24:43.707300   38469 start.go:309] selected driver: docker
	I1124 13:24:43.707317   38469 start.go:927] validating driver "docker" against &{Name:functional-659953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-659953 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:24:43.707422   38469 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:24:43.709269   38469 out.go:203] 
	W1124 13:24:43.710342   38469 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 13:24:43.711376   38469 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.43s)
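functional_test.go:875 passes `status -f` a Go text/template format string (the "kublet" label is copied verbatim from the command above). A minimal sketch of how such a template renders, assuming a status struct exposing the Host, Kubelet, APIServer and Kubeconfig fields the template references; the field values are placeholders, not output captured from this run:

package main

import (
	"os"
	"text/template"
)

// Status mirrors the fields referenced by the -f format string above.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	// Renders the same shape of line that `minikube status -f ...` prints.
	_ = tmpl.Execute(os.Stdout, Status{
		Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
	})
}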

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-659953 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-659953 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-qrk7d" [472720b8-98c5-4788-b152-ff93970416df] Pending
helpers_test.go:352: "hello-node-connect-7d85dfc575-qrk7d" [472720b8-98c5-4788-b152-ff93970416df] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-qrk7d" [472720b8-98c5-4788-b152-ff93970416df] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004012182s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32003
functional_test.go:1680: http://192.168.49.2:32003: success! body:
Request served by hello-node-connect-7d85dfc575-qrk7d

HTTP/1.1 GET /

Host: 192.168.49.2:32003
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.78s)
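The sequence above (create the deployment, expose it as a NodePort service, resolve the URL with `minikube service --url`, then hit it) can be reproduced with a short client. A minimal sketch follows, assuming the URL printed by the service command is pasted in; the 192.168.49.2:32003 endpoint is specific to this run:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
	"time"
)

func main() {
	// In practice, use whatever `minikube -p functional-659953 service hello-node-connect --url` prints.
	url := "http://192.168.49.2:32003"
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		log.Fatalf("GET %s: %v", url, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// kicbase/echo-server echoes the request back, so the serving pod name appears in the body.
	if !strings.Contains(string(body), "Request served by") {
		log.Fatalf("unexpected body: %s", body)
	}
	fmt.Printf("%s: success! body:\n%s", url, body)
}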

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (26.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [2ff26b76-9751-43d7-aa6e-4e6859dadda7] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005641686s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-659953 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-659953 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-659953 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-659953 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [1db3fedd-17f5-4608-92e8-22b494f0b8ad] Pending
helpers_test.go:352: "sp-pod" [1db3fedd-17f5-4608-92e8-22b494f0b8ad] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [1db3fedd-17f5-4608-92e8-22b494f0b8ad] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003046181s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-659953 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-659953 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-659953 delete -f testdata/storage-provisioner/pod.yaml: (1.329957753s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-659953 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [de4ef9bb-9a1a-4e01-9a3b-148df5a5de7f] Pending
helpers_test.go:352: "sp-pod" [de4ef9bb-9a1a-4e01-9a3b-148df5a5de7f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [de4ef9bb-9a1a-4e01-9a3b-148df5a5de7f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.021048414s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-659953 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.43s)
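The check above is a standard PVC persistence pattern: write a file from one pod, delete the pod, recreate it against the same claim, and confirm the file survived. A minimal sketch of that sequence driven through kubectl (context name and manifest paths are copied from the log; readiness waits between steps are omitted for brevity, so this is a sketch rather than the harness's actual logic):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run shells out to kubectl and returns its combined output.
func run(args ...string) string {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	ctx := "functional-659953"
	run("--context", ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("--context", ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run("--context", ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("--context", ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// If the claim persisted the data, the recreated pod still sees the file.
	fmt.Print(run("--context", ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount"))
}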

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh -n functional-659953 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 cp functional-659953:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2589560541/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh -n functional-659953 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh -n functional-659953 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.35s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4178/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "sudo cat /etc/test/nested/copy/4178/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4178.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "sudo cat /etc/ssl/certs/4178.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4178.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "sudo cat /usr/share/ca-certificates/4178.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41782.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "sudo cat /etc/ssl/certs/41782.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41782.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "sudo cat /usr/share/ca-certificates/41782.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.23s)
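CertSync confirms that the host's extra certificates (4178.pem, 41782.pem) and their hashed links appear under /etc/ssl/certs and /usr/share/ca-certificates inside the node. A minimal sketch of the kind of local check one might run to confirm a synced file is a well-formed certificate; the path is a placeholder, since the test reads the files over SSH inside the node:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Placeholder path; the test inspects e.g. /etc/ssl/certs/4178.pem inside the VM.
	data, err := os.ReadFile("/etc/ssl/certs/4178.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("not a PEM certificate")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
}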

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-659953 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659953 ssh "sudo systemctl is-active docker": exit status 1 (367.020945ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659953 ssh "sudo systemctl is-active crio": exit status 1 (342.174012ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
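`systemctl is-active` prints "inactive" and exits non-zero for a stopped unit (status 3 here, which the ssh wrapper surfaces as exit status 1), so the test asserts on both the non-zero exit and the "inactive" stdout. A minimal local sketch of that exit-code handling; the unit name is an example, and the test runs this over `minikube ssh` against docker and crio:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("systemctl", "is-active", "docker").Output()
	state := strings.TrimSpace(string(out))
	if err != nil {
		// Non-zero exit is expected for any state other than "active"; stdout still carries the state.
		fmt.Printf("unit is not active (state=%q, err=%v)\n", state, err)
		return
	}
	fmt.Printf("unit state: %s\n", state)
}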

                                                
                                    
x
+
TestFunctional/parallel/License (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-659953 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-659953 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-659953 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-659953 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 36226: os: process already finished
helpers_test.go:525: unable to kill pid 36015: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.77s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-659953 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-659953 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [511fa36f-253f-4ee7-b1e2-a3eafb5b821c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [511fa36f-253f-4ee7-b1e2-a3eafb5b821c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.002955856s
I1124 13:24:22.848101    4178 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.42s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-659953 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)
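Once `minikube tunnel` is running, the LoadBalancer service picks up an ingress IP, which the test reads with the jsonpath query shown above. A minimal sketch of the same query driven from Go; the context and service names are copied from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same query the test runs: the first ingress IP the tunnel assigned to nginx-svc.
	out, err := exec.Command("kubectl", "--context", "functional-659953",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ingress IP:", strings.TrimSpace(string(out)))
}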

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.111.214 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-659953 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-659953 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-659953 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-mthpj" [0d9a63f0-0302-4841-b25b-c4684c3da22e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-mthpj" [0d9a63f0-0302-4841-b25b-c4684c3da22e] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003791396s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 service list -o json
functional_test.go:1504: Took "620.851697ms" to run "out/minikube-linux-arm64 -p functional-659953 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30618
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "510.244255ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "62.338771ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "470.787485ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "71.453163ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30618
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-659953 /tmp/TestFunctionalparallelMountCmdany-port1950245523/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763990681884350363" to /tmp/TestFunctionalparallelMountCmdany-port1950245523/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763990681884350363" to /tmp/TestFunctionalparallelMountCmdany-port1950245523/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763990681884350363" to /tmp/TestFunctionalparallelMountCmdany-port1950245523/001/test-1763990681884350363
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 13:24 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 13:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 13:24 test-1763990681884350363
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh cat /mount-9p/test-1763990681884350363
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-659953 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [468b0dd5-71b0-4daa-93ad-70537e2185ba] Pending
helpers_test.go:352: "busybox-mount" [468b0dd5-71b0-4daa-93ad-70537e2185ba] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1124 13:24:46.058125    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [468b0dd5-71b0-4daa-93ad-70537e2185ba] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [468b0dd5-71b0-4daa-93ad-70537e2185ba] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003620035s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-659953 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-659953 /tmp/TestFunctionalparallelMountCmdany-port1950245523/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.07s)
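The marker files written on the host (created-by-test, created-by-test-removed-by-pod, test-1763990681884350363) carry a nanosecond timestamp so each mount run is distinguishable once the directory shows up inside the guest at /mount-9p. A minimal sketch of producing the same layout; the source directory is a placeholder for whatever path is handed to `minikube mount`:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"time"
)

func main() {
	dir := "/tmp/mount-src" // placeholder host directory for `minikube mount <dir>:/mount-9p`
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	stamp := fmt.Sprintf("test-%d", time.Now().UnixNano())
	for _, name := range []string{"created-by-test", "created-by-test-removed-by-pod", stamp} {
		// Each marker file just contains the timestamped string.
		if err := os.WriteFile(filepath.Join(dir, name), []byte(stamp), 0o644); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println("wrote markers to", dir)
}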

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-659953 /tmp/TestFunctionalparallelMountCmdspecific-port557640911/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659953 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (606.13656ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 13:24:50.559210    4178 retry.go:31] will retry after 443.757416ms: exit status 1
2025/11/24 13:24:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-659953 /tmp/TestFunctionalparallelMountCmdspecific-port557640911/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659953 ssh "sudo umount -f /mount-9p": exit status 1 (386.499589ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-659953 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-659953 /tmp/TestFunctionalparallelMountCmdspecific-port557640911/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.38s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-659953 version -o=json --components: (1.417634773s)
--- PASS: TestFunctional/parallel/Version/components (1.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-659953 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-659953
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-659953
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-659953 image ls --format short --alsologtostderr:
I1124 13:24:59.475611   41684 out.go:360] Setting OutFile to fd 1 ...
I1124 13:24:59.475904   41684 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:24:59.475949   41684 out.go:374] Setting ErrFile to fd 2...
I1124 13:24:59.475956   41684 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:24:59.476239   41684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
I1124 13:24:59.476873   41684 config.go:182] Loaded profile config "functional-659953": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:24:59.476983   41684 config.go:182] Loaded profile config "functional-659953": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:24:59.477495   41684 cli_runner.go:164] Run: docker container inspect functional-659953 --format={{.State.Status}}
I1124 13:24:59.497878   41684 ssh_runner.go:195] Run: systemctl --version
I1124 13:24:59.497950   41684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-659953
I1124 13:24:59.537206   41684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/functional-659953/id_rsa Username:docker}
I1124 13:24:59.647792   41684 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-659953 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-659953  │ sha256:3fefb9 │ 991B   │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ docker.io/kicbase/echo-server               │ functional-659953  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:43911e │ 24.6MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:b5f57e │ 15.8MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:05baa9 │ 22.8MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/library/nginx                     │ latest             │ sha256:bb747c │ 58.3MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:7eb2c6 │ 20.7MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-659953 image ls --format table --alsologtostderr:
I1124 13:25:00.668633   41915 out.go:360] Setting OutFile to fd 1 ...
I1124 13:25:00.668898   41915 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:25:00.668936   41915 out.go:374] Setting ErrFile to fd 2...
I1124 13:25:00.668957   41915 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:25:00.669290   41915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
I1124 13:25:00.669979   41915 config.go:182] Loaded profile config "functional-659953": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:25:00.670165   41915 config.go:182] Loaded profile config "functional-659953": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:25:00.670742   41915 cli_runner.go:164] Run: docker container inspect functional-659953 --format={{.State.Status}}
I1124 13:25:00.690836   41915 ssh_runner.go:195] Run: systemctl --version
I1124 13:25:00.690899   41915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-659953
I1124 13:25:00.725268   41915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/functional-659953/id_rsa Username:docker}
I1124 13:25:00.838905   41915 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-659953 image ls --format json --alsologtostderr:
[{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"58263548"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d
2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:3fefb95d891418be966625a5d56820b4caf80e49b6274beb9a1dc6d65cddff8f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-659953"],"size":"991"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["re
gistry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23117513"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196",
"repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"24571109"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-659953"],"size":"2173567"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"20720058"},{"id":"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b5
8cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"22788047"},{"id":"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"15779817"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-659953 image ls --format json --alsologtostderr:
I1124 13:24:59.984508   41825 out.go:360] Setting OutFile to fd 1 ...
I1124 13:24:59.984775   41825 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:24:59.984805   41825 out.go:374] Setting ErrFile to fd 2...
I1124 13:24:59.984826   41825 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:24:59.985148   41825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
I1124 13:24:59.986598   41825 config.go:182] Loaded profile config "functional-659953": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:24:59.986787   41825 config.go:182] Loaded profile config "functional-659953": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:24:59.987372   41825 cli_runner.go:164] Run: docker container inspect functional-659953 --format={{.State.Status}}
I1124 13:25:00.123056   41825 ssh_runner.go:195] Run: systemctl --version
I1124 13:25:00.123134   41825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-659953
I1124 13:25:00.240477   41825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/functional-659953/id_rsa Username:docker}
I1124 13:25:00.508754   41825 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.72s)
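Note: the JSON listing above can be post-processed outside the test harness. A minimal sketch, assuming the same profile name and that jq is installed on the workstation (jq is not part of the test itself):

    out/minikube-linux-arm64 -p functional-659953 image ls --format json | jq -r '.[].repoTags[]?'

This prints one repo tag per line; entries whose repoTags array is empty (for example the dashboard and metrics-scraper images above) are simply skipped by the optional iterator.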

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-659953 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-659953
size: "2173567"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:3fefb95d891418be966625a5d56820b4caf80e49b6274beb9a1dc6d65cddff8f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-659953
size: "991"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "24571109"
- id: sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "20720058"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "15779817"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"
- id: sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "58263548"
- id: sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "22788047"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-659953 image ls --format yaml --alsologtostderr:
I1124 13:24:59.640695   41740 out.go:360] Setting OutFile to fd 1 ...
I1124 13:24:59.640939   41740 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:24:59.640970   41740 out.go:374] Setting ErrFile to fd 2...
I1124 13:24:59.640991   41740 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:24:59.641466   41740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
I1124 13:24:59.642485   41740 config.go:182] Loaded profile config "functional-659953": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:24:59.642740   41740 config.go:182] Loaded profile config "functional-659953": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:24:59.643594   41740 cli_runner.go:164] Run: docker container inspect functional-659953 --format={{.State.Status}}
I1124 13:24:59.668503   41740 ssh_runner.go:195] Run: systemctl --version
I1124 13:24:59.668561   41740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-659953
I1124 13:24:59.693788   41740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/functional-659953/id_rsa Username:docker}
I1124 13:24:59.808198   41740 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659953 ssh pgrep buildkitd: exit status 1 (610.001922ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image build -t localhost/my-image:functional-659953 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-659953 image build -t localhost/my-image:functional-659953 testdata/build --alsologtostderr: (3.591158347s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-659953 image build -t localhost/my-image:functional-659953 testdata/build --alsologtostderr:
I1124 13:25:00.476478   41878 out.go:360] Setting OutFile to fd 1 ...
I1124 13:25:00.476769   41878 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:25:00.476805   41878 out.go:374] Setting ErrFile to fd 2...
I1124 13:25:00.476832   41878 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:25:00.477342   41878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
I1124 13:25:00.478690   41878 config.go:182] Loaded profile config "functional-659953": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:25:00.489673   41878 config.go:182] Loaded profile config "functional-659953": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 13:25:00.490326   41878 cli_runner.go:164] Run: docker container inspect functional-659953 --format={{.State.Status}}
I1124 13:25:00.540176   41878 ssh_runner.go:195] Run: systemctl --version
I1124 13:25:00.540271   41878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-659953
I1124 13:25:00.581188   41878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/functional-659953/id_rsa Username:docker}
I1124 13:25:00.712299   41878 build_images.go:162] Building image from path: /tmp/build.788329062.tar
I1124 13:25:00.712397   41878 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 13:25:00.722312   41878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.788329062.tar
I1124 13:25:00.728594   41878 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.788329062.tar: stat -c "%s %y" /var/lib/minikube/build/build.788329062.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.788329062.tar': No such file or directory
I1124 13:25:00.728632   41878 ssh_runner.go:362] scp /tmp/build.788329062.tar --> /var/lib/minikube/build/build.788329062.tar (3072 bytes)
I1124 13:25:00.757439   41878 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.788329062
I1124 13:25:00.767823   41878 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.788329062 -xf /var/lib/minikube/build/build.788329062.tar
I1124 13:25:00.781194   41878 containerd.go:394] Building image: /var/lib/minikube/build/build.788329062
I1124 13:25:00.781266   41878 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.788329062 --local dockerfile=/var/lib/minikube/build/build.788329062 --output type=image,name=localhost/my-image:functional-659953
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:918ac76b5be4b6c8658da094ab05c7398071f0924bf8a337599bcd6e3e0ba7c6
#8 exporting manifest sha256:918ac76b5be4b6c8658da094ab05c7398071f0924bf8a337599bcd6e3e0ba7c6 0.0s done
#8 exporting config sha256:77eedb46f203de41d7ba2d4d5b3c7f4443c2684e670473c575310b14d89b4b9a 0.0s done
#8 naming to localhost/my-image:functional-659953 done
#8 DONE 0.2s
I1124 13:25:03.873668   41878 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.788329062 --local dockerfile=/var/lib/minikube/build/build.788329062 --output type=image,name=localhost/my-image:functional-659953: (3.09236994s)
I1124 13:25:03.873740   41878 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.788329062
I1124 13:25:03.882247   41878 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.788329062.tar
I1124 13:25:03.890421   41878 build_images.go:218] Built localhost/my-image:functional-659953 from /tmp/build.788329062.tar
I1124 13:25:03.890454   41878 build_images.go:134] succeeded building to: functional-659953
I1124 13:25:03.890459   41878 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.43s)
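Note: the BuildKit trace above (load Dockerfile, FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /) implies a three-instruction Dockerfile. The actual contents of testdata/build are not reproduced in this log, so the following is an illustrative reconstruction only:

    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /

It was built with the command shown in the run above: out/minikube-linux-arm64 -p functional-659953 image build -t localhost/my-image:functional-659953 testdata/build --alsologtostderr.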

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-659953
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.74s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.71s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-659953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4197463809/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-659953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4197463809/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-659953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4197463809/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659953 ssh "findmnt -T" /mount1: exit status 1 (727.584152ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1124 13:24:53.073173    4178 retry.go:31] will retry after 743.246373ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-659953 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-659953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4197463809/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-659953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4197463809/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-659953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4197463809/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.71s)
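Note: this test starts three concurrent mount daemons backed by the same host directory, checks each target with findmnt over ssh, and then tears all of them down with a single --kill=true call. A minimal sketch of the same sequence run by hand (the host path below is a placeholder, not the test's temp directory):

    out/minikube-linux-arm64 mount -p functional-659953 /tmp/host-dir:/mount1 &
    out/minikube-linux-arm64 -p functional-659953 ssh "findmnt -T" /mount1
    out/minikube-linux-arm64 mount -p functional-659953 --kill=true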

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image load --daemon kicbase/echo-server:functional-659953 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-659953 image load --daemon kicbase/echo-server:functional-659953 --alsologtostderr: (1.022760983s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image load --daemon kicbase/echo-server:functional-659953 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.26s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-659953
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image load --daemon kicbase/echo-server:functional-659953 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image save kicbase/echo-server:functional-659953 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image rm kicbase/echo-server:functional-659953 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)
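Note: ImageSaveToFile and ImageLoadFromFile together exercise a tar round trip through the cluster's image store. The commands, taken from the two runs above (the tar path is the workspace location used by this job):

    out/minikube-linux-arm64 -p functional-659953 image save kicbase/echo-server:functional-659953 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-659953 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-659953 image ls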

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-659953
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-659953 image save --daemon kicbase/echo-server:functional-659953 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-659953
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-659953
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-659953
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-659953
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (183.07s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1124 13:27:02.198178    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:27:29.901643    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-277038 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (3m2.186277818s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (183.07s)
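Note: every later test in this group runs against the HA cluster created here; the invocation, reproduced from the run above, followed by the status check, is:

    out/minikube-linux-arm64 -p ha-277038 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p ha-277038 status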

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.94s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-277038 kubectl -- rollout status deployment/busybox: (3.986234064s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- exec busybox-7b57f96db7-lmwmh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- exec busybox-7b57f96db7-rbgzd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- exec busybox-7b57f96db7-vrr65 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- exec busybox-7b57f96db7-lmwmh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- exec busybox-7b57f96db7-rbgzd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- exec busybox-7b57f96db7-vrr65 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- exec busybox-7b57f96db7-lmwmh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- exec busybox-7b57f96db7-rbgzd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- exec busybox-7b57f96db7-vrr65 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.94s)
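Note: the deploy check applies testdata/ha/ha-pod-dns-test.yaml, waits for the busybox Deployment to roll out, and then resolves kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local from every replica. A minimal sketch of that loop (pod names are generated per run, so they are queried with the same jsonpath the test uses rather than hard-coded):

    out/minikube-linux-arm64 -p ha-277038 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-arm64 -p ha-277038 kubectl -- rollout status deployment/busybox
    for pod in $(out/minikube-linux-arm64 -p ha-277038 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      out/minikube-linux-arm64 -p ha-277038 kubectl -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done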

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.69s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- exec busybox-7b57f96db7-lmwmh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- exec busybox-7b57f96db7-lmwmh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- exec busybox-7b57f96db7-rbgzd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- exec busybox-7b57f96db7-rbgzd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- exec busybox-7b57f96db7-vrr65 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 kubectl -- exec busybox-7b57f96db7-vrr65 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.69s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (61.27s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 node add --alsologtostderr -v 5
E1124 13:29:13.430558    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:29:13.437043    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:29:13.448457    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:29:13.469829    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:29:13.511113    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:29:13.592651    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:29:13.754097    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:29:14.075532    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:29:14.717132    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:29:16.009876    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:29:18.571227    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-277038 node add --alsologtostderr -v 5: (1m0.190188233s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-277038 status --alsologtostderr -v 5: (1.078030938s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.27s)
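Note: node add without --control-plane joins the new machine as a worker, which matches the Worker role later reported for ha-277038-m04 in the StopSecondaryNode status output. The repeated cert_rotation warnings above reference the client certificate of the functional-659953 profile, not ha-277038, and the test still passed. The commands, with the verbose flags omitted:

    out/minikube-linux-arm64 -p ha-277038 node add
    out/minikube-linux-arm64 -p ha-277038 status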

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-277038 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.078551292s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.49s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-277038 status --output json --alsologtostderr -v 5: (1.088662237s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp testdata/cp-test.txt ha-277038:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp ha-277038:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1576535975/001/cp-test_ha-277038.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp ha-277038:/home/docker/cp-test.txt ha-277038-m02:/home/docker/cp-test_ha-277038_ha-277038-m02.txt
E1124 13:29:23.697200    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m02 "sudo cat /home/docker/cp-test_ha-277038_ha-277038-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp ha-277038:/home/docker/cp-test.txt ha-277038-m03:/home/docker/cp-test_ha-277038_ha-277038-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m03 "sudo cat /home/docker/cp-test_ha-277038_ha-277038-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp ha-277038:/home/docker/cp-test.txt ha-277038-m04:/home/docker/cp-test_ha-277038_ha-277038-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m04 "sudo cat /home/docker/cp-test_ha-277038_ha-277038-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp testdata/cp-test.txt ha-277038-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp ha-277038-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1576535975/001/cp-test_ha-277038-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp ha-277038-m02:/home/docker/cp-test.txt ha-277038:/home/docker/cp-test_ha-277038-m02_ha-277038.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038 "sudo cat /home/docker/cp-test_ha-277038-m02_ha-277038.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp ha-277038-m02:/home/docker/cp-test.txt ha-277038-m03:/home/docker/cp-test_ha-277038-m02_ha-277038-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m03 "sudo cat /home/docker/cp-test_ha-277038-m02_ha-277038-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp ha-277038-m02:/home/docker/cp-test.txt ha-277038-m04:/home/docker/cp-test_ha-277038-m02_ha-277038-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m04 "sudo cat /home/docker/cp-test_ha-277038-m02_ha-277038-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp testdata/cp-test.txt ha-277038-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp ha-277038-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1576535975/001/cp-test_ha-277038-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp ha-277038-m03:/home/docker/cp-test.txt ha-277038:/home/docker/cp-test_ha-277038-m03_ha-277038.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m03 "sudo cat /home/docker/cp-test.txt"
E1124 13:29:33.939003    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038 "sudo cat /home/docker/cp-test_ha-277038-m03_ha-277038.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp ha-277038-m03:/home/docker/cp-test.txt ha-277038-m02:/home/docker/cp-test_ha-277038-m03_ha-277038-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m02 "sudo cat /home/docker/cp-test_ha-277038-m03_ha-277038-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp ha-277038-m03:/home/docker/cp-test.txt ha-277038-m04:/home/docker/cp-test_ha-277038-m03_ha-277038-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m04 "sudo cat /home/docker/cp-test_ha-277038-m03_ha-277038-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp testdata/cp-test.txt ha-277038-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp ha-277038-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1576535975/001/cp-test_ha-277038-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp ha-277038-m04:/home/docker/cp-test.txt ha-277038:/home/docker/cp-test_ha-277038-m04_ha-277038.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038 "sudo cat /home/docker/cp-test_ha-277038-m04_ha-277038.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp ha-277038-m04:/home/docker/cp-test.txt ha-277038-m02:/home/docker/cp-test_ha-277038-m04_ha-277038-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m02 "sudo cat /home/docker/cp-test_ha-277038-m04_ha-277038-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 cp ha-277038-m04:/home/docker/cp-test.txt ha-277038-m03:/home/docker/cp-test_ha-277038-m04_ha-277038-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m03 "sudo cat /home/docker/cp-test_ha-277038-m04_ha-277038-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.49s)
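Note: the copy matrix above pushes testdata/cp-test.txt to every node, copies it between each pair of nodes, and verifies the contents over ssh after every step. One leg of the matrix, reproduced verbatim from the run:

    out/minikube-linux-arm64 -p ha-277038 cp testdata/cp-test.txt ha-277038-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-277038 ssh -n ha-277038-m02 "sudo cat /home/docker/cp-test.txt"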

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-277038 node stop m02 --alsologtostderr -v 5: (12.183439425s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 status --alsologtostderr -v 5
E1124 13:29:54.420306    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-277038 status --alsologtostderr -v 5: exit status 7 (813.39172ms)

-- stdout --
	ha-277038
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-277038-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-277038-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-277038-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1124 13:29:53.898819   58263 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:29:53.899184   58263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:29:53.899229   58263 out.go:374] Setting ErrFile to fd 2...
	I1124 13:29:53.899256   58263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:29:53.899748   58263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 13:29:53.900148   58263 out.go:368] Setting JSON to false
	I1124 13:29:53.900300   58263 mustload.go:66] Loading cluster: ha-277038
	I1124 13:29:53.900692   58263 notify.go:221] Checking for updates...
	I1124 13:29:53.902234   58263 config.go:182] Loaded profile config "ha-277038": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:29:53.902263   58263 status.go:174] checking status of ha-277038 ...
	I1124 13:29:53.902814   58263 cli_runner.go:164] Run: docker container inspect ha-277038 --format={{.State.Status}}
	I1124 13:29:53.921497   58263 status.go:371] ha-277038 host status = "Running" (err=<nil>)
	I1124 13:29:53.921520   58263 host.go:66] Checking if "ha-277038" exists ...
	I1124 13:29:53.921834   58263 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-277038
	I1124 13:29:53.952788   58263 host.go:66] Checking if "ha-277038" exists ...
	I1124 13:29:53.953106   58263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:29:53.953202   58263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-277038
	I1124 13:29:53.984032   58263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/ha-277038/id_rsa Username:docker}
	I1124 13:29:54.093891   58263 ssh_runner.go:195] Run: systemctl --version
	I1124 13:29:54.100355   58263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:29:54.114586   58263 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:29:54.182565   58263 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-24 13:29:54.172758356 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:29:54.183120   58263 kubeconfig.go:125] found "ha-277038" server: "https://192.168.49.254:8443"
	I1124 13:29:54.183152   58263 api_server.go:166] Checking apiserver status ...
	I1124 13:29:54.183251   58263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:29:54.196987   58263 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1398/cgroup
	I1124 13:29:54.205109   58263 api_server.go:182] apiserver freezer: "4:freezer:/docker/ba1d52d3531e31675d3a6c5727e348f0ffc13a2196c54afd12064ea9831aa83b/kubepods/burstable/pod2cbc047cc3e92cebb75f8a6f6390285d/d2b2054f5a8ae43ecb36384c6fffd951a61cf6e0ba0d457795bcf151703ed419"
	I1124 13:29:54.205191   58263 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ba1d52d3531e31675d3a6c5727e348f0ffc13a2196c54afd12064ea9831aa83b/kubepods/burstable/pod2cbc047cc3e92cebb75f8a6f6390285d/d2b2054f5a8ae43ecb36384c6fffd951a61cf6e0ba0d457795bcf151703ed419/freezer.state
	I1124 13:29:54.215873   58263 api_server.go:204] freezer state: "THAWED"
	I1124 13:29:54.215903   58263 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 13:29:54.225326   58263 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 13:29:54.225354   58263 status.go:463] ha-277038 apiserver status = Running (err=<nil>)
	I1124 13:29:54.225365   58263 status.go:176] ha-277038 status: &{Name:ha-277038 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:29:54.225404   58263 status.go:174] checking status of ha-277038-m02 ...
	I1124 13:29:54.225737   58263 cli_runner.go:164] Run: docker container inspect ha-277038-m02 --format={{.State.Status}}
	I1124 13:29:54.245637   58263 status.go:371] ha-277038-m02 host status = "Stopped" (err=<nil>)
	I1124 13:29:54.245660   58263 status.go:384] host is not running, skipping remaining checks
	I1124 13:29:54.245667   58263 status.go:176] ha-277038-m02 status: &{Name:ha-277038-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:29:54.245686   58263 status.go:174] checking status of ha-277038-m03 ...
	I1124 13:29:54.246014   58263 cli_runner.go:164] Run: docker container inspect ha-277038-m03 --format={{.State.Status}}
	I1124 13:29:54.264458   58263 status.go:371] ha-277038-m03 host status = "Running" (err=<nil>)
	I1124 13:29:54.264484   58263 host.go:66] Checking if "ha-277038-m03" exists ...
	I1124 13:29:54.264805   58263 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-277038-m03
	I1124 13:29:54.281932   58263 host.go:66] Checking if "ha-277038-m03" exists ...
	I1124 13:29:54.282249   58263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:29:54.282291   58263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-277038-m03
	I1124 13:29:54.298848   58263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/ha-277038-m03/id_rsa Username:docker}
	I1124 13:29:54.405673   58263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:29:54.419022   58263 kubeconfig.go:125] found "ha-277038" server: "https://192.168.49.254:8443"
	I1124 13:29:54.419055   58263 api_server.go:166] Checking apiserver status ...
	I1124 13:29:54.419110   58263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:29:54.431682   58263 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1406/cgroup
	I1124 13:29:54.439813   58263 api_server.go:182] apiserver freezer: "4:freezer:/docker/17db4e7e5c9862e3724ab0270eb9791fbddeb13b1b0be3b519f11796bceb6ac7/kubepods/burstable/pod3e98d33b8f8cd1dab0d999d7e138461a/6fdea398df1e962182b4e3bad3e03e2aba74193b97fde75fbbd661aae50d3dc8"
	I1124 13:29:54.439969   58263 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/17db4e7e5c9862e3724ab0270eb9791fbddeb13b1b0be3b519f11796bceb6ac7/kubepods/burstable/pod3e98d33b8f8cd1dab0d999d7e138461a/6fdea398df1e962182b4e3bad3e03e2aba74193b97fde75fbbd661aae50d3dc8/freezer.state
	I1124 13:29:54.454811   58263 api_server.go:204] freezer state: "THAWED"
	I1124 13:29:54.454877   58263 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 13:29:54.466542   58263 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 13:29:54.466583   58263 status.go:463] ha-277038-m03 apiserver status = Running (err=<nil>)
	I1124 13:29:54.466609   58263 status.go:176] ha-277038-m03 status: &{Name:ha-277038-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:29:54.466633   58263 status.go:174] checking status of ha-277038-m04 ...
	I1124 13:29:54.466976   58263 cli_runner.go:164] Run: docker container inspect ha-277038-m04 --format={{.State.Status}}
	I1124 13:29:54.484490   58263 status.go:371] ha-277038-m04 host status = "Running" (err=<nil>)
	I1124 13:29:54.484517   58263 host.go:66] Checking if "ha-277038-m04" exists ...
	I1124 13:29:54.484846   58263 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-277038-m04
	I1124 13:29:54.504092   58263 host.go:66] Checking if "ha-277038-m04" exists ...
	I1124 13:29:54.504525   58263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:29:54.504575   58263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-277038-m04
	I1124 13:29:54.523984   58263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/ha-277038-m04/id_rsa Username:docker}
	I1124 13:29:54.629115   58263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:29:54.642204   58263 status.go:176] ha-277038-m04 status: &{Name:ha-277038-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.00s)
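Note: the stderr block above shows how "minikube status" decides an apiserver is Running: find the newest kube-apiserver process, confirm its freezer cgroup is THAWED (i.e. the node is not paused), then probe the load-balanced /healthz endpoint. A rough shell equivalent of that sequence, run inside a control-plane node and assuming the cgroup v1 freezer layout seen in this log:

    # newest kube-apiserver started for this minikube profile
    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    # its freezer cgroup path, e.g. /docker/<node-id>/kubepods/burstable/<pod>/<container>
    FREEZER=$(sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)
    sudo cat /sys/fs/cgroup/freezer$FREEZER/freezer.state   # expect: THAWED
    # then hit the HA endpoint from the log (/healthz is anonymously readable by default)
    curl -sk https://192.168.49.254:8443/healthz             # expect: ok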

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (13.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-277038 node start m02 --alsologtostderr -v 5: (11.423084215s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-277038 status --alsologtostderr -v 5: (1.518452888s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.08s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.496230911s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 stop --alsologtostderr -v 5
E1124 13:30:35.382633    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-277038 stop --alsologtostderr -v 5: (37.730125042s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-277038 start --wait true --alsologtostderr -v 5: (1m1.139954031s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.05s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 node delete m03 --alsologtostderr -v 5
E1124 13:31:57.304323    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-277038 node delete m03 --alsologtostderr -v 5: (10.390849569s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Done: out/minikube-linux-arm64 -p ha-277038 status --alsologtostderr -v 5: (1.251108857s)
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.84s)
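Note: the last check above asks kubectl for just the Ready condition of every remaining node. Unescaped, the go-template it passes looks like this; each healthy node prints one True line:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # one status per remaining node, e.g.
    #  True
    #  True
    #  True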

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 stop --alsologtostderr -v 5
E1124 13:32:02.198182    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-277038 stop --alsologtostderr -v 5: (36.280603108s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-277038 status --alsologtostderr -v 5: exit status 7 (136.240214ms)

                                                
                                                
-- stdout --
	ha-277038
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-277038-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-277038-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:32:38.076621   73068 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:32:38.076758   73068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:32:38.076767   73068 out.go:374] Setting ErrFile to fd 2...
	I1124 13:32:38.076777   73068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:32:38.077175   73068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 13:32:38.077414   73068 out.go:368] Setting JSON to false
	I1124 13:32:38.077442   73068 mustload.go:66] Loading cluster: ha-277038
	I1124 13:32:38.077873   73068 config.go:182] Loaded profile config "ha-277038": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:32:38.077891   73068 status.go:174] checking status of ha-277038 ...
	I1124 13:32:38.078511   73068 cli_runner.go:164] Run: docker container inspect ha-277038 --format={{.State.Status}}
	I1124 13:32:38.078826   73068 notify.go:221] Checking for updates...
	I1124 13:32:38.099899   73068 status.go:371] ha-277038 host status = "Stopped" (err=<nil>)
	I1124 13:32:38.100021   73068 status.go:384] host is not running, skipping remaining checks
	I1124 13:32:38.100029   73068 status.go:176] ha-277038 status: &{Name:ha-277038 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:32:38.100068   73068 status.go:174] checking status of ha-277038-m02 ...
	I1124 13:32:38.100374   73068 cli_runner.go:164] Run: docker container inspect ha-277038-m02 --format={{.State.Status}}
	I1124 13:32:38.138182   73068 status.go:371] ha-277038-m02 host status = "Stopped" (err=<nil>)
	I1124 13:32:38.138210   73068 status.go:384] host is not running, skipping remaining checks
	I1124 13:32:38.138217   73068 status.go:176] ha-277038-m02 status: &{Name:ha-277038-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:32:38.138239   73068 status.go:174] checking status of ha-277038-m04 ...
	I1124 13:32:38.138564   73068 cli_runner.go:164] Run: docker container inspect ha-277038-m04 --format={{.State.Status}}
	I1124 13:32:38.159048   73068 status.go:371] ha-277038-m04 host status = "Stopped" (err=<nil>)
	I1124 13:32:38.159069   73068 status.go:384] host is not running, skipping remaining checks
	I1124 13:32:38.159088   73068 status.go:176] ha-277038-m04 status: &{Name:ha-277038-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (60.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-277038 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.486709488s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (82.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 node add --control-plane --alsologtostderr -v 5
E1124 13:34:13.430069    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:34:41.146422    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-277038 node add --control-plane --alsologtostderr -v 5: (1m21.290270918s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-277038 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-277038 status --alsologtostderr -v 5: (1.129898387s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.079843518s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)
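Note: the Degraded* and HAppy* assertions in this group run nothing but "profile list --output json" and read each node's health back out of that JSON. A quick way to eyeball the same data by hand (a sketch; the field names are assumed from minikube's profile JSON and may differ slightly between versions):

    out/minikube-linux-arm64 profile list --output json \
      | jq '.valid[] | {name: .Name, status: .Status, nodes: (.Config.Nodes | length)}'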

                                                
                                    
x
+
TestJSONOutput/start/Command (53.51s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-545148 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-545148 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (53.502450278s)
--- PASS: TestJSONOutput/start/Command (53.51s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-545148 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-545148 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (1.47s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-545148 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-545148 --output=json --user=testUser: (1.467306863s)
--- PASS: TestJSONOutput/stop/Command (1.47s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-263551 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-263551 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (98.703953ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f48f1a58-492b-4dbb-b896-90ed3f699d8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-263551] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"40756cbe-96aa-449b-a6f9-43186aa70a18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21932"}}
	{"specversion":"1.0","id":"47bf76af-2315-4e87-90c8-f61da05960d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6788a59f-f542-4ad6-9eeb-c8f830f01c6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig"}}
	{"specversion":"1.0","id":"721b8758-64f1-4e05-951e-28fe259a6fd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube"}}
	{"specversion":"1.0","id":"108d1530-a4e9-436c-95df-aef8e2a96114","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3ac67f8c-f6cd-4bbe-8e3e-c1f5cfea9ba3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6efca8c0-6bb7-40bc-b565-2e2bd0468a60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-263551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-263551
--- PASS: TestErrorJSONOutput (0.25s)
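Note: the stdout captured above is minikube's --output=json stream: one CloudEvents-style object per line, with the event kind in "type" (io.k8s.sigs.minikube.step, .info, .error) and the payload under "data". A small consumer sketch, assuming jq is installed, that pulls out just the error event this test expects:

    out/minikube-linux-arm64 start -p json-output-error-263551 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"'
    # DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/arm64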

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (48.14s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-376214 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-376214 --network=: (45.833412411s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-376214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-376214
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-376214: (2.279859s)
--- PASS: TestKicCustomNetwork/create_custom_network (48.14s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (36.22s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-784173 --network=bridge
E1124 13:37:02.197639    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-784173 --network=bridge: (34.037454925s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-784173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-784173
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-784173: (2.150532837s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.22s)

                                                
                                    
x
+
TestKicExistingNetwork (35.4s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1124 13:37:38.042207    4178 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1124 13:37:38.060307    4178 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1124 13:37:38.060383    4178 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1124 13:37:38.060399    4178 cli_runner.go:164] Run: docker network inspect existing-network
W1124 13:37:38.077665    4178 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1124 13:37:38.077706    4178 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1124 13:37:38.077722    4178 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1124 13:37:38.077824    4178 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1124 13:37:38.102849    4178 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e5e15b13860d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:3d:37:c4:cc:77} reservation:<nil>}
I1124 13:37:38.103106    4178 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017777c0}
I1124 13:37:38.103130    4178 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1124 13:37:38.103183    4178 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1124 13:37:38.166886    4178 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-905490 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-905490 --network=existing-network: (33.132580043s)
helpers_test.go:175: Cleaning up "existing-network-905490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-905490
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-905490: (2.107975785s)
I1124 13:38:13.424629    4178 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.40s)
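Note: TestKicExistingNetwork pre-creates a Docker bridge network (the network_create.go lines above) and then points minikube at it with --network, which reuses the network instead of creating a new one. The manual equivalent, with the subnet the test picked; minikube's bookkeeping labels are omitted here for brevity:

    docker network create --driver=bridge \
      --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o com.docker.network.driver.mtu=1500 existing-network
    out/minikube-linux-arm64 start -p existing-network-905490 --network=existing-network
    docker network ls --format '{{.Name}}'   # existing-network is reused, not recreated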

                                                
                                    
x
+
TestKicCustomSubnet (36.32s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-475326 --subnet=192.168.60.0/24
E1124 13:38:25.264102    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-475326 --subnet=192.168.60.0/24: (34.112940912s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-475326 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-475326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-475326
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-475326: (2.176824032s)
--- PASS: TestKicCustomSubnet (36.32s)

                                                
                                    
x
+
TestKicStaticIP (37.39s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-732764 --static-ip=192.168.200.200
E1124 13:39:13.430972    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-732764 --static-ip=192.168.200.200: (35.035018094s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-732764 ip
helpers_test.go:175: Cleaning up "static-ip-732764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-732764
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-732764: (2.201395473s)
--- PASS: TestKicStaticIP (37.39s)
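Note: the same flow works for a fixed address: --static-ip pins the node to a specific private address on its own network, and "minikube ip" reads it back:

    out/minikube-linux-arm64 start -p static-ip-732764 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-732764 ip      # expect 192.168.200.200
    out/minikube-linux-arm64 delete -p static-ip-732764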

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (67.4s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-386695 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-386695 --driver=docker  --container-runtime=containerd: (30.29166735s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-389222 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-389222 --driver=docker  --container-runtime=containerd: (30.946244994s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-386695
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-389222
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-389222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-389222
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-389222: (2.224013702s)
helpers_test.go:175: Cleaning up "first-386695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-386695
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-386695: (2.430086266s)
--- PASS: TestMinikubeProfile (67.40s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (9.01s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-378405 --memory=3072 --mount-string /tmp/TestMountStartserial3453516998/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-378405 --memory=3072 --mount-string /tmp/TestMountStartserial3453516998/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.006199208s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.01s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-378405 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
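Note: the two tests above exercise the --mount-string workflow: start a no-Kubernetes node with a host directory mounted into it (the msize/port flags are 9p mount options), then confirm the mount over ssh. The same invocation the test logs, condensed; the host path can be any directory you want to expose:

    HOSTDIR=$(mktemp -d)
    out/minikube-linux-arm64 start -p mount-start-1-378405 --memory=3072 --no-kubernetes \
      --mount-string "$HOSTDIR:/minikube-host" \
      --mount-uid 0 --mount-gid 0 --mount-port 46464 --mount-msize 6543 \
      --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p mount-start-1-378405 ssh -- ls /minikube-host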

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.2s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-380341 --memory=3072 --mount-string /tmp/TestMountStartserial3453516998/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-380341 --memory=3072 --mount-string /tmp/TestMountStartserial3453516998/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.200411469s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.20s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-380341 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-378405 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-378405 --alsologtostderr -v=5: (1.727197803s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-380341 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-380341
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-380341: (1.281603307s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.1s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-380341
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-380341: (7.096807952s)
--- PASS: TestMountStart/serial/RestartStopped (8.10s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-380341 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (108.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-327561 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1124 13:42:02.197922    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-327561 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m48.251599778s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.79s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-327561 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-327561 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-327561 -- rollout status deployment/busybox: (3.627615939s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-327561 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-327561 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-327561 -- exec busybox-7b57f96db7-4b9mk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-327561 -- exec busybox-7b57f96db7-8t8dt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-327561 -- exec busybox-7b57f96db7-4b9mk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-327561 -- exec busybox-7b57f96db7-8t8dt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-327561 -- exec busybox-7b57f96db7-4b9mk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-327561 -- exec busybox-7b57f96db7-8t8dt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.40s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-327561 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-327561 -- exec busybox-7b57f96db7-4b9mk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-327561 -- exec busybox-7b57f96db7-4b9mk -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-327561 -- exec busybox-7b57f96db7-8t8dt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-327561 -- exec busybox-7b57f96db7-8t8dt -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)
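Note: the connectivity check above resolves host.minikube.internal inside each busybox pod and pings the address it gets back (the host-side gateway, 192.168.67.1 here). Run by hand against one pod, with the pod name taken from the log; the awk/cut pipeline just extracts the resolved IP from nslookup's output:

    HOST_IP=$(out/minikube-linux-arm64 kubectl -p multinode-327561 -- exec busybox-7b57f96db7-4b9mk -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-arm64 kubectl -p multinode-327561 -- exec busybox-7b57f96db7-4b9mk -- \
      sh -c "ping -c 1 $HOST_IP"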

                                                
                                    
x
+
TestMultiNode/serial/AddNode (28.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-327561 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-327561 -v=5 --alsologtostderr: (27.495963161s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.19s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-327561 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 cp testdata/cp-test.txt multinode-327561:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 cp multinode-327561:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1769304933/001/cp-test_multinode-327561.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 cp multinode-327561:/home/docker/cp-test.txt multinode-327561-m02:/home/docker/cp-test_multinode-327561_multinode-327561-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561-m02 "sudo cat /home/docker/cp-test_multinode-327561_multinode-327561-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 cp multinode-327561:/home/docker/cp-test.txt multinode-327561-m03:/home/docker/cp-test_multinode-327561_multinode-327561-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561-m03 "sudo cat /home/docker/cp-test_multinode-327561_multinode-327561-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 cp testdata/cp-test.txt multinode-327561-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 cp multinode-327561-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1769304933/001/cp-test_multinode-327561-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 cp multinode-327561-m02:/home/docker/cp-test.txt multinode-327561:/home/docker/cp-test_multinode-327561-m02_multinode-327561.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561 "sudo cat /home/docker/cp-test_multinode-327561-m02_multinode-327561.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 cp multinode-327561-m02:/home/docker/cp-test.txt multinode-327561-m03:/home/docker/cp-test_multinode-327561-m02_multinode-327561-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561-m03 "sudo cat /home/docker/cp-test_multinode-327561-m02_multinode-327561-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 cp testdata/cp-test.txt multinode-327561-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 cp multinode-327561-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1769304933/001/cp-test_multinode-327561-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 cp multinode-327561-m03:/home/docker/cp-test.txt multinode-327561:/home/docker/cp-test_multinode-327561-m03_multinode-327561.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561 "sudo cat /home/docker/cp-test_multinode-327561-m03_multinode-327561.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 cp multinode-327561-m03:/home/docker/cp-test.txt multinode-327561-m02:/home/docker/cp-test_multinode-327561-m03_multinode-327561-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561-m02 "sudo cat /home/docker/cp-test_multinode-327561-m03_multinode-327561-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.59s)
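Note on reproducing the copy matrix above by hand: the same three forms of "minikube cp" are exercised (host-to-node, node-to-host, node-to-node), and each copy is verified with ssh -n <node> "sudo cat ...". A minimal sketch using the profile and node names from this log (the host-side destination path below is illustrative, not the test's temp dir):

	# host file -> worker node, then verify on the node
	out/minikube-linux-arm64 -p multinode-327561 cp testdata/cp-test.txt multinode-327561-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p multinode-327561 ssh -n multinode-327561-m02 "sudo cat /home/docker/cp-test.txt"
	# node -> host
	out/minikube-linux-arm64 -p multinode-327561 cp multinode-327561-m02:/home/docker/cp-test.txt /tmp/cp-test_multinode-327561-m02.txt
	# node -> node
	out/minikube-linux-arm64 -p multinode-327561 cp multinode-327561-m02:/home/docker/cp-test.txt multinode-327561:/home/docker/cp-test_multinode-327561-m02_multinode-327561.txt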

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-327561 node stop m03: (1.319872373s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-327561 status: exit status 7 (582.131505ms)

                                                
                                                
-- stdout --
	multinode-327561
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-327561-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-327561-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-327561 status --alsologtostderr: exit status 7 (544.912107ms)

                                                
                                                
-- stdout --
	multinode-327561
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-327561-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-327561-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:43:40.600478  126104 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:43:40.600626  126104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:43:40.600637  126104 out.go:374] Setting ErrFile to fd 2...
	I1124 13:43:40.600643  126104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:43:40.600904  126104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 13:43:40.601168  126104 out.go:368] Setting JSON to false
	I1124 13:43:40.601202  126104 mustload.go:66] Loading cluster: multinode-327561
	I1124 13:43:40.601259  126104 notify.go:221] Checking for updates...
	I1124 13:43:40.601727  126104 config.go:182] Loaded profile config "multinode-327561": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:43:40.601742  126104 status.go:174] checking status of multinode-327561 ...
	I1124 13:43:40.602585  126104 cli_runner.go:164] Run: docker container inspect multinode-327561 --format={{.State.Status}}
	I1124 13:43:40.621874  126104 status.go:371] multinode-327561 host status = "Running" (err=<nil>)
	I1124 13:43:40.621899  126104 host.go:66] Checking if "multinode-327561" exists ...
	I1124 13:43:40.622201  126104 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-327561
	I1124 13:43:40.648380  126104 host.go:66] Checking if "multinode-327561" exists ...
	I1124 13:43:40.648671  126104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:43:40.648718  126104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-327561
	I1124 13:43:40.666153  126104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/multinode-327561/id_rsa Username:docker}
	I1124 13:43:40.769837  126104 ssh_runner.go:195] Run: systemctl --version
	I1124 13:43:40.776495  126104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:43:40.789141  126104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:43:40.861473  126104 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 13:43:40.851396878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:43:40.862049  126104 kubeconfig.go:125] found "multinode-327561" server: "https://192.168.67.2:8443"
	I1124 13:43:40.862082  126104 api_server.go:166] Checking apiserver status ...
	I1124 13:43:40.862127  126104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:43:40.874464  126104 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1364/cgroup
	I1124 13:43:40.884148  126104 api_server.go:182] apiserver freezer: "4:freezer:/docker/b803f2347a1bf232ae077daea2f0a898cba7c6297297bc20421a61f711b84d60/kubepods/burstable/pod49616e8f36611b2d66b100a5453e6edf/f92b8899425a0769e7a25f9540c0b49fae82b6141dbe2f8b12c74a9b956a3c6a"
	I1124 13:43:40.884235  126104 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b803f2347a1bf232ae077daea2f0a898cba7c6297297bc20421a61f711b84d60/kubepods/burstable/pod49616e8f36611b2d66b100a5453e6edf/f92b8899425a0769e7a25f9540c0b49fae82b6141dbe2f8b12c74a9b956a3c6a/freezer.state
	I1124 13:43:40.891774  126104 api_server.go:204] freezer state: "THAWED"
	I1124 13:43:40.891846  126104 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1124 13:43:40.900169  126104 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1124 13:43:40.900197  126104 status.go:463] multinode-327561 apiserver status = Running (err=<nil>)
	I1124 13:43:40.900207  126104 status.go:176] multinode-327561 status: &{Name:multinode-327561 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:43:40.900224  126104 status.go:174] checking status of multinode-327561-m02 ...
	I1124 13:43:40.900549  126104 cli_runner.go:164] Run: docker container inspect multinode-327561-m02 --format={{.State.Status}}
	I1124 13:43:40.917285  126104 status.go:371] multinode-327561-m02 host status = "Running" (err=<nil>)
	I1124 13:43:40.917310  126104 host.go:66] Checking if "multinode-327561-m02" exists ...
	I1124 13:43:40.917657  126104 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-327561-m02
	I1124 13:43:40.934573  126104 host.go:66] Checking if "multinode-327561-m02" exists ...
	I1124 13:43:40.934893  126104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:43:40.934930  126104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-327561-m02
	I1124 13:43:40.952480  126104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21932-2368/.minikube/machines/multinode-327561-m02/id_rsa Username:docker}
	I1124 13:43:41.057648  126104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:43:41.070445  126104 status.go:176] multinode-327561-m02 status: &{Name:multinode-327561-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:43:41.070478  126104 status.go:174] checking status of multinode-327561-m03 ...
	I1124 13:43:41.070781  126104 cli_runner.go:164] Run: docker container inspect multinode-327561-m03 --format={{.State.Status}}
	I1124 13:43:41.088145  126104 status.go:371] multinode-327561-m03 host status = "Stopped" (err=<nil>)
	I1124 13:43:41.088169  126104 status.go:384] host is not running, skipping remaining checks
	I1124 13:43:41.088176  126104 status.go:176] multinode-327561-m03 status: &{Name:multinode-327561-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)
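Note: status deliberately exits non-zero (exit status 7 here) whenever any node in the profile is stopped, which is why the two runs above are recorded as Non-zero exit even though the test passes. A minimal by-hand reproduction with the same commands:

	out/minikube-linux-arm64 -p multinode-327561 node stop m03
	out/minikube-linux-arm64 -p multinode-327561 status --alsologtostderr   # exit status 7 while m03 is stopped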

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-327561 node start m03 -v=5 --alsologtostderr: (7.203676288s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.02s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (73.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-327561
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-327561
E1124 13:44:13.429921    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-327561: (25.189479025s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-327561 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-327561 --wait=true -v=5 --alsologtostderr: (47.804323651s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-327561
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.13s)
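Note: the point of the sequence above is that a full stop and restart preserves the three-node topology. A minimal sketch with the same commands:

	out/minikube-linux-arm64 node list -p multinode-327561
	out/minikube-linux-arm64 stop -p multinode-327561
	out/minikube-linux-arm64 start -p multinode-327561 --wait=true -v=5 --alsologtostderr
	out/minikube-linux-arm64 node list -p multinode-327561   # expected to match the list taken before the stop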

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-327561 node delete m03: (5.025062096s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.74s)
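Note: the go-template in the last kubectl call prints the Ready condition status for every node, one per line, so after deleting m03 it is expected to emit one entry per remaining node. A minimal sketch of the delete-and-verify step, with the template copied verbatim from the log:

	out/minikube-linux-arm64 -p multinode-327561 node delete m03
	kubectl get nodes
	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"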

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-327561 stop: (23.955456136s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-327561 status: exit status 7 (96.514364ms)

                                                
                                                
-- stdout --
	multinode-327561
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-327561-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-327561 status --alsologtostderr: exit status 7 (89.286204ms)

                                                
                                                
-- stdout --
	multinode-327561
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-327561-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:45:32.074160  134839 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:45:32.074311  134839 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:45:32.074334  134839 out.go:374] Setting ErrFile to fd 2...
	I1124 13:45:32.074351  134839 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:45:32.074623  134839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 13:45:32.074831  134839 out.go:368] Setting JSON to false
	I1124 13:45:32.074881  134839 mustload.go:66] Loading cluster: multinode-327561
	I1124 13:45:32.074947  134839 notify.go:221] Checking for updates...
	I1124 13:45:32.076218  134839 config.go:182] Loaded profile config "multinode-327561": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:45:32.076245  134839 status.go:174] checking status of multinode-327561 ...
	I1124 13:45:32.077898  134839 cli_runner.go:164] Run: docker container inspect multinode-327561 --format={{.State.Status}}
	I1124 13:45:32.096336  134839 status.go:371] multinode-327561 host status = "Stopped" (err=<nil>)
	I1124 13:45:32.096363  134839 status.go:384] host is not running, skipping remaining checks
	I1124 13:45:32.096370  134839 status.go:176] multinode-327561 status: &{Name:multinode-327561 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:45:32.096395  134839 status.go:174] checking status of multinode-327561-m02 ...
	I1124 13:45:32.096704  134839 cli_runner.go:164] Run: docker container inspect multinode-327561-m02 --format={{.State.Status}}
	I1124 13:45:32.114372  134839 status.go:371] multinode-327561-m02 host status = "Stopped" (err=<nil>)
	I1124 13:45:32.114398  134839 status.go:384] host is not running, skipping remaining checks
	I1124 13:45:32.114405  134839 status.go:176] multinode-327561-m02 status: &{Name:multinode-327561-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.14s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (49.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-327561 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1124 13:45:36.507852    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-327561 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (48.714862935s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-327561 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.41s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (36.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-327561
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-327561-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-327561-m02 --driver=docker  --container-runtime=containerd: exit status 14 (93.213724ms)

                                                
                                                
-- stdout --
	* [multinode-327561-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-327561-m02' is duplicated with machine name 'multinode-327561-m02' in profile 'multinode-327561'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-327561-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-327561-m03 --driver=docker  --container-runtime=containerd: (34.040492682s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-327561
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-327561: exit status 80 (334.689241ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-327561 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-327561-m03 already exists in multinode-327561-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-327561-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-327561-m03: (2.069066678s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.59s)
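Note: both non-zero exits above are name-collision checks rather than test errors. Starting a new profile named multinode-327561-m02 is rejected (exit status 14) because that name is already a machine inside profile multinode-327561, and node add is rejected (exit status 80) because the next node name, multinode-327561-m03, collides with the standalone profile created just before. A minimal reproduction with the same commands:

	out/minikube-linux-arm64 start -p multinode-327561-m02 --driver=docker --container-runtime=containerd   # exit status 14, duplicated profile name
	out/minikube-linux-arm64 start -p multinode-327561-m03 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 node add -p multinode-327561                                                   # exit status 80, node name already taken
	out/minikube-linux-arm64 delete -p multinode-327561-m03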

                                                
                                    
x
+
TestPreload (120.76s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-302819 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-302819 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (59.728457021s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-302819 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-302819 image pull gcr.io/k8s-minikube/busybox: (2.473080898s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-302819
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-302819: (5.879152792s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-302819 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-302819 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (49.965456454s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-302819 image list
helpers_test.go:175: Cleaning up "test-preload-302819" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-302819
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-302819: (2.4770473s)
--- PASS: TestPreload (120.76s)
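Note: the sequence above checks that an image pulled into a cluster started with --preload=false is still present after a stop/start cycle on the same profile. A minimal sketch with the same commands (flags taken from the log):

	out/minikube-linux-arm64 start -p test-preload-302819 --memory=3072 --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.32.0
	out/minikube-linux-arm64 -p test-preload-302819 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-arm64 stop -p test-preload-302819
	out/minikube-linux-arm64 start -p test-preload-302819 --memory=3072 --wait=true --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p test-preload-302819 image list   # busybox is expected to still be listed
	out/minikube-linux-arm64 delete -p test-preload-302819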

                                                
                                    
x
+
TestScheduledStopUnix (112.02s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-432567 --memory=3072 --driver=docker  --container-runtime=containerd
E1124 13:49:13.430980    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-432567 --memory=3072 --driver=docker  --container-runtime=containerd: (35.794606774s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-432567 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 13:49:39.269977  150708 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:49:39.270195  150708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:49:39.270226  150708 out.go:374] Setting ErrFile to fd 2...
	I1124 13:49:39.270251  150708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:49:39.270523  150708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 13:49:39.270793  150708 out.go:368] Setting JSON to false
	I1124 13:49:39.270934  150708 mustload.go:66] Loading cluster: scheduled-stop-432567
	I1124 13:49:39.279034  150708 config.go:182] Loaded profile config "scheduled-stop-432567": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:49:39.279421  150708 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/config.json ...
	I1124 13:49:39.280535  150708 mustload.go:66] Loading cluster: scheduled-stop-432567
	I1124 13:49:39.281124  150708 config.go:182] Loaded profile config "scheduled-stop-432567": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-432567 -n scheduled-stop-432567
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-432567 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 13:49:39.705179  150799 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:49:39.705340  150799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:49:39.705354  150799 out.go:374] Setting ErrFile to fd 2...
	I1124 13:49:39.705359  150799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:49:39.705604  150799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 13:49:39.705840  150799 out.go:368] Setting JSON to false
	I1124 13:49:39.706045  150799 daemonize_unix.go:73] killing process 150725 as it is an old scheduled stop
	I1124 13:49:39.708069  150799 mustload.go:66] Loading cluster: scheduled-stop-432567
	I1124 13:49:39.708583  150799 config.go:182] Loaded profile config "scheduled-stop-432567": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:49:39.708710  150799 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/config.json ...
	I1124 13:49:39.709136  150799 mustload.go:66] Loading cluster: scheduled-stop-432567
	I1124 13:49:39.709316  150799 config.go:182] Loaded profile config "scheduled-stop-432567": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1124 13:49:39.714809    4178 retry.go:31] will retry after 118.324µs: open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/pid: no such file or directory
I1124 13:49:39.716032    4178 retry.go:31] will retry after 123.558µs: open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/pid: no such file or directory
I1124 13:49:39.717192    4178 retry.go:31] will retry after 279.179µs: open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/pid: no such file or directory
I1124 13:49:39.717604    4178 retry.go:31] will retry after 391.064µs: open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/pid: no such file or directory
I1124 13:49:39.718750    4178 retry.go:31] will retry after 743.954µs: open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/pid: no such file or directory
I1124 13:49:39.719883    4178 retry.go:31] will retry after 1.104922ms: open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/pid: no such file or directory
I1124 13:49:39.721065    4178 retry.go:31] will retry after 803.056µs: open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/pid: no such file or directory
I1124 13:49:39.722182    4178 retry.go:31] will retry after 2.43802ms: open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/pid: no such file or directory
I1124 13:49:39.725396    4178 retry.go:31] will retry after 2.420466ms: open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/pid: no such file or directory
I1124 13:49:39.728316    4178 retry.go:31] will retry after 2.084989ms: open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/pid: no such file or directory
I1124 13:49:39.731535    4178 retry.go:31] will retry after 8.583663ms: open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/pid: no such file or directory
I1124 13:49:39.740765    4178 retry.go:31] will retry after 9.436532ms: open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/pid: no such file or directory
I1124 13:49:39.750412    4178 retry.go:31] will retry after 17.516008ms: open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/pid: no such file or directory
I1124 13:49:39.768593    4178 retry.go:31] will retry after 22.412731ms: open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/pid: no such file or directory
I1124 13:49:39.791823    4178 retry.go:31] will retry after 28.952896ms: open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-432567 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-432567 -n scheduled-stop-432567
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-432567
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-432567 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 13:50:05.664371  151473 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:50:05.664493  151473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:50:05.664506  151473 out.go:374] Setting ErrFile to fd 2...
	I1124 13:50:05.664512  151473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:50:05.664751  151473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 13:50:05.664980  151473 out.go:368] Setting JSON to false
	I1124 13:50:05.665079  151473 mustload.go:66] Loading cluster: scheduled-stop-432567
	I1124 13:50:05.665517  151473 config.go:182] Loaded profile config "scheduled-stop-432567": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:50:05.665591  151473 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/scheduled-stop-432567/config.json ...
	I1124 13:50:05.665816  151473 mustload.go:66] Loading cluster: scheduled-stop-432567
	I1124 13:50:05.665938  151473 config.go:182] Loaded profile config "scheduled-stop-432567": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-432567
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-432567: exit status 7 (66.463415ms)

                                                
                                                
-- stdout --
	scheduled-stop-432567
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-432567 -n scheduled-stop-432567
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-432567 -n scheduled-stop-432567: exit status 7 (67.5455ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-432567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-432567
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-432567: (4.638962338s)
--- PASS: TestScheduledStopUnix (112.02s)
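Note: a scheduled stop is handled by a background process whose pid is written under the profile directory (the retry lines above poll .minikube/profiles/scheduled-stop-432567/pid), and re-running stop --schedule replaces any older scheduled stop ("killing process ... as it is an old scheduled stop"). A minimal sketch of scheduling, inspecting, and cancelling with the same flags:

	out/minikube-linux-arm64 stop -p scheduled-stop-432567 --schedule 5m -v=5 --alsologtostderr
	out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-432567 -n scheduled-stop-432567
	out/minikube-linux-arm64 stop -p scheduled-stop-432567 --cancel-scheduled
	out/minikube-linux-arm64 stop -p scheduled-stop-432567 --schedule 15s -v=5 --alsologtostderr
	# roughly 15s later the cluster is stopped and status exits with status 7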

                                                
                                    
x
+
TestInsufficientStorage (13.04s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-550492 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-550492 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.449380417s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1ce901a8-0827-4cc2-89f6-5af042999788","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-550492] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cfbd2ced-7cf8-4a08-a5b9-d2317e2edbfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21932"}}
	{"specversion":"1.0","id":"f4fb4728-905f-40ce-b0f7-c788ffa28a80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"266e46ab-4135-4c3c-825b-c262322f7bcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig"}}
	{"specversion":"1.0","id":"b74cf54a-c4d2-4c36-a512-7b6c1f95f590","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube"}}
	{"specversion":"1.0","id":"2bc76148-46e8-42d7-8fa0-bfc1b199b995","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6af036e5-dba5-48bc-8842-f0bd4f1896d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1e928be9-707a-4725-9585-c82134fa31fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d627ded9-8707-421a-8842-d60d03b37fab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f08552b6-36f2-4b0c-8d11-def682e26efe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"db482148-d150-44ae-afbb-c1f08011565d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"6ac4729d-1d5e-4959-a954-f9b2e8056739","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-550492\" primary control-plane node in \"insufficient-storage-550492\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a72b93fd-cd49-463c-bcc1-e5278a78518b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"bdfb897a-19ce-4729-ac75-38f403a934b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6db37c92-93f4-4036-a8f4-8b98cd24b7e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-550492 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-550492 --output=json --layout=cluster: exit status 7 (315.362156ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-550492","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-550492","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1124 13:51:06.198153  153114 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-550492" does not appear in /home/jenkins/minikube-integration/21932-2368/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-550492 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-550492 --output=json --layout=cluster: exit status 7 (309.553339ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-550492","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-550492","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1124 13:51:06.506100  153179 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-550492" does not appear in /home/jenkins/minikube-integration/21932-2368/kubeconfig
	E1124 13:51:06.517197  153179 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/insufficient-storage-550492/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-550492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-550492
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-550492: (1.961759056s)
--- PASS: TestInsufficientStorage (13.04s)
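Note: the start above fails with exit status 26 (RSRC_DOCKER_STORAGE) because the harness caps the storage figures minikube sees; MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 appear in the JSON output alongside the other environment entries, so presumably they are exported as environment variables for this run. A minimal sketch under that assumption:

	export MINIKUBE_TEST_STORAGE_CAPACITY=100
	export MINIKUBE_TEST_AVAILABLE_STORAGE=19
	out/minikube-linux-arm64 start -p insufficient-storage-550492 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=containerd   # exit status 26
	out/minikube-linux-arm64 status -p insufficient-storage-550492 --output=json --layout=cluster   # reports StatusName "InsufficientStorage", exit status 7
	out/minikube-linux-arm64 delete -p insufficient-storage-550492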

                                                
                                    
x
+
TestRunningBinaryUpgrade (70.06s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3818617898 start -p running-upgrade-545664 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1124 13:55:05.267118    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3818617898 start -p running-upgrade-545664 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (33.661274411s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-545664 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-545664 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.33642187s)
helpers_test.go:175: Cleaning up "running-upgrade-545664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-545664
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-545664: (2.139982618s)
--- PASS: TestRunningBinaryUpgrade (70.06s)
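Note: this test starts a cluster with an older release binary and then re-runs start on the same profile with the binary under test, so the upgrade happens against a running cluster. A minimal sketch (the old binary path is wherever the v1.32.0 release was unpacked, /tmp/minikube-v1.32.0.* in this run):

	/tmp/minikube-v1.32.0.3818617898 start -p running-upgrade-545664 --memory=3072 --vm-driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 start -p running-upgrade-545664 --memory=3072 -v=1 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 delete -p running-upgrade-545664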

                                                
                                    
x
+
TestKubernetesUpgrade (361.88s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-758885 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-758885 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.89185285s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-758885
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-758885: (1.331512641s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-758885 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-758885 status --format={{.Host}}: exit status 7 (73.438903ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-758885 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-758885 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m58.904294072s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-758885 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-758885 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-758885 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (108.170616ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-758885] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-758885
	    minikube start -p kubernetes-upgrade-758885 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7588852 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-758885 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-758885 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-758885 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.681506623s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-758885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-758885
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-758885: (2.738140917s)
--- PASS: TestKubernetesUpgrade (361.88s)
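Note: the sequence above is upgrade, attempted downgrade, then a plain restart on the new version; the downgrade is rejected up front (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) with the delete/recreate suggestions shown in stderr. A minimal sketch with the same commands:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-758885 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-758885
	out/minikube-linux-arm64 start -p kubernetes-upgrade-758885 --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 start -p kubernetes-upgrade-758885 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd   # exit status 106, downgrade refused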

                                                
                                    
x
+
TestMissingContainerUpgrade (141.02s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.113576342 start -p missing-upgrade-706064 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.113576342 start -p missing-upgrade-706064 --memory=3072 --driver=docker  --container-runtime=containerd: (1m6.671132307s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-706064
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-706064
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-706064 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-706064 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m8.50969151s)
helpers_test.go:175: Cleaning up "missing-upgrade-706064" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-706064
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-706064: (2.169447101s)
--- PASS: TestMissingContainerUpgrade (141.02s)
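Note: here the docker container backing the old cluster is removed out from under minikube before the new binary runs start, so the test covers recovery when the machine container is missing. A minimal sketch (old binary path as extracted for this run):

	/tmp/minikube-v1.32.0.113576342 start -p missing-upgrade-706064 --memory=3072 --driver=docker --container-runtime=containerd
	docker stop missing-upgrade-706064
	docker rm missing-upgrade-706064
	out/minikube-linux-arm64 start -p missing-upgrade-706064 --memory=3072 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 delete -p missing-upgrade-706064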

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-747664 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-747664 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (93.055607ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-747664] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
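Note: --no-kubernetes and --kubernetes-version are mutually exclusive, so the start above exits with status 14 (MK_USAGE) before doing any work; the stderr also shows how to clear a globally configured version. A minimal reproduction:

	out/minikube-linux-arm64 start -p NoKubernetes-747664 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd   # exit status 14
	out/minikube-linux-arm64 config unset kubernetes-version
	out/minikube-linux-arm64 start -p NoKubernetes-747664 --no-kubernetes --memory=3072 --driver=docker --container-runtime=containerd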

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (39.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-747664 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-747664 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.702350368s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-747664 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (17.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-747664 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1124 13:52:02.197988    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-747664 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (14.893840963s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-747664 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-747664 status -o json: exit status 2 (377.851535ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-747664","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-747664
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-747664: (2.329123869s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.60s)
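
The stdout block above shows the shape of the machine-readable status ("Host":"Running", "Kubelet":"Stopped") and that the command exits with status 2 when components are stopped. A minimal sketch of decoding that output follows; the struct mirrors only the keys visible above, and the binary path and profile name are taken from the log rather than from the suite's helpers.

package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os/exec"
)

// profileStatus mirrors only the keys visible in the stdout block above.
type profileStatus struct {
	Name      string `json:"Name"`
	Host      string `json:"Host"`
	Kubelet   string `json:"Kubelet"`
	APIServer string `json:"APIServer"`
}

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "NoKubernetes-747664", "status", "-o", "json")
	out, err := cmd.Output()

	// "status" exits non-zero (status 2 above) when components are stopped, so a
	// plain exit error is tolerated as long as stdout was captured.
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		fmt.Println("could not run minikube status:", err)
		return
	}

	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unexpected status output:", err)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}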

                                                
                                    
TestNoKubernetes/serial/Start (10.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-747664 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-747664 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (10.05341497s)
--- PASS: TestNoKubernetes/serial/Start (10.05s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21932-2368/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
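
The check above looks at the version cache directory to confirm that a --no-kubernetes start pulled no Kubernetes binaries. A rough sketch of such a check is shown below; the directory path is the one printed in the log, and the assumption that an absent or empty directory is the passing condition is mine, not the test's documented contract.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache path printed by the test above; anything in here would indicate an
	// unexpected Kubernetes download for a --no-kubernetes profile.
	dir := "/home/jenkins/minikube-integration/21932-2368/.minikube/cache/linux/arm64/v0.0.0"

	entries, err := os.ReadDir(dir)
	if os.IsNotExist(err) {
		fmt.Println("cache directory absent: nothing was downloaded")
		return
	}
	if err != nil {
		fmt.Println("could not inspect cache directory:", err)
		return
	}
	if len(entries) == 0 {
		fmt.Println("cache directory empty: nothing was downloaded")
		return
	}
	for _, e := range entries {
		fmt.Println("unexpected cached file:", e.Name())
	}
}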

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-747664 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-747664 "sudo systemctl is-active --quiet service kubelet": exit status 1 (429.433313ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.43s)
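
The probe above asserts that kubelet is not running inside the node by invoking systemctl through minikube ssh and expecting a non-zero exit (status 3 in the stderr block). A small sketch of the same idea, using only the standard library, follows; the binary path and profile name are copied from the log.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-747664",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active - unexpected for a --no-kubernetes profile")
	case errors.As(err, &exitErr):
		// systemctl reports an inactive unit with a non-zero exit code (3 in the log above).
		fmt.Printf("kubelet is not active (exit code %d), as the test expects\n", exitErr.ExitCode())
	default:
		fmt.Println("could not run the probe:", err)
	}
}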

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.71s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-747664
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-747664: (1.455629049s)
--- PASS: TestNoKubernetes/serial/Stop (1.46s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-747664 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-747664 --driver=docker  --container-runtime=containerd: (6.560808089s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.56s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-747664 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-747664 "sudo systemctl is-active --quiet service kubelet": exit status 1 (267.675356ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.99s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (55.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.243855354 start -p stopped-upgrade-655256 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.243855354 start -p stopped-upgrade-655256 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (34.930275239s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.243855354 -p stopped-upgrade-655256 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.243855354 -p stopped-upgrade-655256 stop: (1.242146275s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-655256 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1124 13:54:13.430118    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-655256 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (19.477473127s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (55.65s)
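
The upgrade scenario above is driven in three steps: create the cluster with the previously released binary, stop it with that same binary, then start the stopped profile with the binary under test. The sketch below restates that sequence as plain exec calls; both binary paths and all flags are taken from the log, and the run helper is illustrative rather than the suite's code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes one step of the upgrade sequence and aborts on the first failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Printf("step %s %v failed: %v\n", name, args, err)
		os.Exit(1)
	}
}

func main() {
	old := "/tmp/minikube-v1.32.0.243855354" // released binary, path from the log
	cur := "out/minikube-linux-arm64"        // binary under test

	// 1. Create the cluster with the old release.
	run(old, "start", "-p", "stopped-upgrade-655256", "--memory=3072",
		"--vm-driver=docker", "--container-runtime=containerd")
	// 2. Stop it with the same old binary.
	run(old, "-p", "stopped-upgrade-655256", "stop")
	// 3. Start the stopped profile again with the new binary; this is the upgrade.
	run(cur, "start", "-p", "stopped-upgrade-655256", "--memory=3072",
		"--driver=docker", "--container-runtime=containerd")
}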

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-655256
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-655256: (1.421091625s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

                                                
                                    
TestPause/serial/Start (82.13s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-148400 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1124 13:57:02.197396    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-148400 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m22.133938778s)
--- PASS: TestPause/serial/Start (82.13s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.55s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-148400 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-148400 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.528536379s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.55s)

                                                
                                    
TestPause/serial/Pause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-148400 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
TestPause/serial/VerifyStatus (0.41s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-148400 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-148400 --output=json --layout=cluster: exit status 2 (412.410014ms)

                                                
                                                
-- stdout --
	{"Name":"pause-148400","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-148400","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
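
The --layout=cluster output above encodes the paused state per component (apiserver 418 "Paused", kubelet 405 "Stopped"). The following self-contained sketch decodes a trimmed copy of that JSON to show how such an assertion can be made; the struct fields mirror only the keys visible in the log.

package main

import (
	"encoding/json"
	"fmt"
)

// component and clusterStatus mirror only the keys visible in the stdout block above.
type component struct {
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed copy of the JSON printed above.
	raw := []byte(`{"Name":"pause-148400","StatusName":"Paused","Nodes":[{"Name":"pause-148400","Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`)

	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	for _, n := range st.Nodes {
		fmt.Printf("node %s: apiserver=%s, kubelet=%s\n",
			n.Name, n.Components["apiserver"].StatusName, n.Components["kubelet"].StatusName)
	}
}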

                                                
                                    
TestPause/serial/Unpause (0.83s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-148400 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.83s)

                                                
                                    
TestPause/serial/PauseAgain (0.84s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-148400 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

                                                
                                    
TestPause/serial/DeletePaused (2.85s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-148400 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-148400 --alsologtostderr -v=5: (2.853926003s)
--- PASS: TestPause/serial/DeletePaused (2.85s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-148400
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-148400: exit status 1 (18.097128ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-148400: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.39s)
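
The cleanup check above relies on docker volume inspect failing with an empty JSON array once the profile's volume has been removed. A short sketch of that check follows; the volume name is the profile name from the log, and the interpretation of "exit 1 plus []" as deleted simply mirrors the output shown above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// After "minikube delete", inspecting the profile's volume should fail and
	// print an empty JSON array, exactly as in the log above.
	out, err := exec.Command("docker", "volume", "inspect", "pause-148400").Output()
	if err != nil && strings.TrimSpace(string(out)) == "[]" {
		fmt.Println("volume is gone, as expected after delete")
		return
	}
	fmt.Println("volume still present, or inspect unexpectedly succeeded")
}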

                                                
                                    
TestNetworkPlugins/group/false (5.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-803934 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-803934 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (296.940829ms)

                                                
                                                
-- stdout --
	* [false-803934] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:58:06.578203  192166 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:58:06.578305  192166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:58:06.578311  192166 out.go:374] Setting ErrFile to fd 2...
	I1124 13:58:06.578315  192166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:58:06.578690  192166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-2368/.minikube/bin
	I1124 13:58:06.579677  192166 out.go:368] Setting JSON to false
	I1124 13:58:06.581274  192166 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6036,"bootTime":1763986651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 13:58:06.581350  192166 start.go:143] virtualization:  
	I1124 13:58:06.584974  192166 out.go:179] * [false-803934] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 13:58:06.588813  192166 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:58:06.588995  192166 notify.go:221] Checking for updates...
	I1124 13:58:06.596064  192166 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:58:06.598983  192166 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-2368/kubeconfig
	I1124 13:58:06.601839  192166 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-2368/.minikube
	I1124 13:58:06.604984  192166 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 13:58:06.607849  192166 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:58:06.611367  192166 config.go:182] Loaded profile config "kubernetes-upgrade-758885": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 13:58:06.611484  192166 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:58:06.641613  192166 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 13:58:06.641728  192166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:58:06.744514  192166 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 13:58:06.73376789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 13:58:06.744626  192166 docker.go:319] overlay module found
	I1124 13:58:06.748109  192166 out.go:179] * Using the docker driver based on user configuration
	I1124 13:58:06.751971  192166 start.go:309] selected driver: docker
	I1124 13:58:06.752003  192166 start.go:927] validating driver "docker" against <nil>
	I1124 13:58:06.752017  192166 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:58:06.755736  192166 out.go:203] 
	W1124 13:58:06.760734  192166 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1124 13:58:06.763712  192166 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-803934 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-803934

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-803934

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-803934

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-803934

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-803934

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-803934

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-803934

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-803934

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-803934

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-803934

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-803934

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-803934" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-803934" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:58:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-758885
contexts:
- context:
    cluster: kubernetes-upgrade-758885
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:58:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-758885
  name: kubernetes-upgrade-758885
current-context: kubernetes-upgrade-758885
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-758885
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/kubernetes-upgrade-758885/client.crt
    client-key: /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/kubernetes-upgrade-758885/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-803934

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-803934"

                                                
                                                
----------------------- debugLogs end: false-803934 [took: 5.145590521s] --------------------------------
helpers_test.go:175: Cleaning up "false-803934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-803934
--- PASS: TestNetworkPlugins/group/false (5.69s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (59.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-318786 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-318786 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (59.876504125s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (59.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-318786 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-318786 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.083646965s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-318786 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-318786 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-318786 --alsologtostderr -v=3: (12.134544611s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-318786 -n old-k8s-version-318786
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-318786 -n old-k8s-version-318786: exit status 7 (72.604146ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-318786 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
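
The guard above asks for a single field via a Go template (--format={{.Host}}) and tolerates the non-zero exit that a stopped host produces (exit status 7, noted above as "may be ok") before enabling the dashboard addon offline. A minimal sketch of that pattern follows; the binary path, profile and flags are copied from the log, and the program is illustrative rather than the suite's helper.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status", "--format={{.Host}}",
		"-p", "old-k8s-version-318786", "-n", "old-k8s-version-318786")
	out, err := cmd.Output()

	// A stopped host makes "status" exit non-zero (status 7 above), so only a
	// failure to run the binary at all is treated as fatal here.
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		fmt.Println("could not query status:", err)
		return
	}
	if strings.TrimSpace(string(out)) == "Stopped" {
		fmt.Println("host is stopped; the dashboard addon can still be enabled offline")
	}
}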

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (49.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-318786 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-318786 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (48.691228228s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-318786 -n old-k8s-version-318786
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-fj5rm" [4dbae30f-7a71-47f9-8177-cf95cdbd22a0] Running
E1124 14:02:02.198278    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004133375s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
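
The wait above polls for pods carrying the k8s-app=kubernetes-dashboard label until they are healthy, with a 9-minute budget. Expressed with plain kubectl instead of the suite's helpers, the same wait could look like the sketch below; the context, namespace and label come from the log, and the 540s timeout mirrors the 9m0s budget.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Block until pods carrying the dashboard label report Ready in the restarted cluster.
	cmd := exec.Command("kubectl", "--context", "old-k8s-version-318786",
		"-n", "kubernetes-dashboard",
		"wait", "--for=condition=Ready", "pod",
		"-l", "k8s-app=kubernetes-dashboard",
		"--timeout=540s")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("dashboard pod never became Ready:", err)
		os.Exit(1)
	}
	fmt.Println("dashboard pod is Ready")
}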

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-fj5rm" [4dbae30f-7a71-47f9-8177-cf95cdbd22a0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004216774s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-318786 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-318786 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
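
The image audit above lists the images loaded in the profile and reports the ones that are not minikube's own. The sketch below shows one way to scan that listing; it deliberately treats the --format=json output as opaque text so that no assumption is made about the exact JSON layout, and the expected extra image names are the ones reported in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the images loaded in the profile, as the test above does.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "old-k8s-version-318786",
		"image", "list", "--format=json").Output()
	if err != nil {
		fmt.Println("could not list images:", err)
		return
	}
	// Non-minikube images reported by the test above; anything else would be worth flagging.
	expectedExtras := []string{
		"kindest/kindnetd",
		"gcr.io/k8s-minikube/busybox",
	}
	for _, name := range expectedExtras {
		if strings.Contains(string(out), name) {
			fmt.Println("found expected extra image:", name)
		}
	}
}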

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-318786 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-318786 -n old-k8s-version-318786
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-318786 -n old-k8s-version-318786: exit status 2 (461.438922ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-318786 -n old-k8s-version-318786
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-318786 -n old-k8s-version-318786: exit status 2 (428.966115ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-318786 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-318786 -n old-k8s-version-318786
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-318786 -n old-k8s-version-318786
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-609438 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-609438 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m22.545273406s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.55s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (86.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-593634 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-593634 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m26.564992446s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.57s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-609438 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-609438 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.073170455s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-609438 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-609438 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-609438 --alsologtostderr -v=3: (12.195905264s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-593634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-593634 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-593634 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-593634 --alsologtostderr -v=3: (12.583647749s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.58s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438: exit status 7 (133.445896ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-609438 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.30s)
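This step is straightforward to reproduce by hand. A minimal sketch, using the profile name from this run (exit status 7 from status simply reports a stopped host and is tolerated here):

	# While the profile is stopped, the Host field reports "Stopped" and status exits 7
	out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438 || true
	# Addons can still be toggled on a stopped profile; the change is picked up on the next start
	out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-609438 --images=MetricsScraper=registry.k8s.io/echoserver:1.4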

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-609438 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1124 14:04:13.430865    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-609438 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (51.641469378s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.39s)
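The second start reuses the stopped profile with the same flags as the original start (notably --apiserver-port=8444 and the containerd runtime) and then confirms the host is back up. A rough hand-run equivalent, trimmed of logging flags:

	# Bring the previously stopped cluster back up with identical settings
	out/minikube-linux-arm64 start -p default-k8s-diff-port-609438 --memory=3072 --wait=true --apiserver-port=8444 --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1
	# The Host field should now report Running and status should exit 0
	out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438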

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-593634 -n embed-certs-593634
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-593634 -n embed-certs-593634: exit status 7 (84.584117ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-593634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (52.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-593634 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-593634 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (52.099246982s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-593634 -n embed-certs-593634
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vfm92" [e5007c36-cfd3-4538-a1d5-4952108d2ba6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00356904s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vfm92" [e5007c36-cfd3-4538-a1d5-4952108d2ba6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006195182s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-609438 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)
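Both of these restart checks can be approximated outside the harness: the first waits for the dashboard workload deployed before the restart to come back, the second verifies the dashboard addon's scraper deployment still exists. Roughly, with the same context:

	# Wait for the dashboard pod that predates the restart to become Ready again
	kubectl --context default-k8s-diff-port-609438 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	# Confirm the metrics-scraper deployment installed by the dashboard addon survived the restart
	kubectl --context default-k8s-diff-port-609438 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard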

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-609438 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)
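The image check lists what the profile's container runtime has loaded and logs anything outside the expected Kubernetes image set; the two "non-minikube" images reported here (the busybox test image and kindnet) are informational only, and the test still passes with them present. The same data can be inspected by hand:

	# JSON listing of every image present in the profile's container runtime
	out/minikube-linux-arm64 -p default-k8s-diff-port-609438 image list --format=json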

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-609438 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438: exit status 2 (356.44037ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438: exit status 2 (355.216906ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-609438 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.35s)
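The pause check drives a pause/inspect/unpause cycle: while paused, the API server reports Paused and the kubelet reports Stopped, and both status queries exit 2, which the test tolerates; after unpausing, the same queries succeed again. A hand-run sketch with this run's profile:

	out/minikube-linux-arm64 pause -p default-k8s-diff-port-609438
	# Both of these exit 2 while paused (APIServer=Paused, Kubelet=Stopped)
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438 || true
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438 || true
	out/minikube-linux-arm64 unpause -p default-k8s-diff-port-609438
	# After unpause the same status queries return without error
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-609438 -n default-k8s-diff-port-609438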

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-srn6j" [1f89c045-1f0b-4a26-9d43-187d1eaf9742] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003305965s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (73.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-694102 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-694102 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m13.820701906s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.82s)
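This group runs with --preload=false, so the start cannot use minikube's preloaded image/binary tarball and has to pull everything individually, which typically makes a first start slower than a preloaded one. The invocation, trimmed of logging flags:

	# No preload tarball: images are pulled one by one at start time
	out/minikube-linux-arm64 start -p no-preload-694102 --memory=3072 --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1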

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-srn6j" [1f89c045-1f0b-4a26-9d43-187d1eaf9742] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003344659s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-593634 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-593634 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-593634 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-593634 -n embed-certs-593634
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-593634 -n embed-certs-593634: exit status 2 (437.907925ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-593634 -n embed-certs-593634
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-593634 -n embed-certs-593634: exit status 2 (423.875788ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-593634 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-593634 -n embed-certs-593634
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-593634 -n embed-certs-593634
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.72s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-857121 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1124 14:05:42.441280    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:05:42.448082    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:05:42.459422    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:05:42.480804    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:05:42.522168    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:05:42.603557    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:05:42.765097    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:05:43.086459    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:05:43.728179    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:05:45.011517    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:05:47.572829    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:05:52.695391    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:06:02.936690    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-857121 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (47.720460995s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.72s)
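The newest-cni group starts in CNI plugin mode without installing a CNI, so only control-plane readiness is waited for (--wait=apiserver,system_pods,default_sa) and user pods are not expected to schedule, which is what the later "cni mode requires additional setup" warnings refer to. The invocation, trimmed of logging flags:

	# CNI network plugin mode, with a custom pod CIDR passed through to kubeadm
	out/minikube-linux-arm64 start -p newest-cni-857121 --memory=3072 --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1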

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-857121 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-857121 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.076755882s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-857121 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-857121 --alsologtostderr -v=3: (1.373805428s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-857121 -n newest-cni-857121
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-857121 -n newest-cni-857121: exit status 7 (71.988112ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-857121 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-857121 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1124 14:06:23.418628    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-857121 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (15.963615966s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-857121 -n newest-cni-857121
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-857121 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-857121 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-857121 -n newest-cni-857121
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-857121 -n newest-cni-857121: exit status 2 (340.274117ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-857121 -n newest-cni-857121
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-857121 -n newest-cni-857121: exit status 2 (350.214316ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-857121 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-857121 -n newest-cni-857121
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-857121 -n newest-cni-857121
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.34s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (90.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-803934 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-803934 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m30.145049134s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-694102 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-694102 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.270892371s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-694102 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-694102 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-694102 --alsologtostderr -v=3: (12.345657675s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-694102 -n no-preload-694102
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-694102 -n no-preload-694102: exit status 7 (104.415715ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-694102 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (56.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-694102 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1124 14:07:02.198050    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:07:04.380574    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-694102 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (55.997860949s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-694102 -n no-preload-694102
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (56.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bnmfq" [adb4f826-7c8e-401d-9dda-42d68d2f491e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003542896s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bnmfq" [adb4f826-7c8e-401d-9dda-42d68d2f491e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003271465s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-694102 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-694102 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-694102 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-694102 -n no-preload-694102
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-694102 -n no-preload-694102: exit status 2 (331.72571ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-694102 -n no-preload-694102
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-694102 -n no-preload-694102: exit status 2 (341.705206ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-694102 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-694102 -n no-preload-694102
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-694102 -n no-preload-694102
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.35s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-803934 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-803934 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lhsqr" [64f4bf77-0ce1-438a-8dbc-3ae131f8d3ec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lhsqr" [64f4bf77-0ce1-438a-8dbc-3ae131f8d3ec] Running
E1124 14:08:26.302693    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004607163s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.40s)
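The NetCatPod step (re)creates a small netcat/dnsutils test deployment in the target cluster and polls until its pod is Running. Roughly, with this run's context:

	# Deploy (or forcibly replace) the netcat test workload shipped with the suite
	kubectl --context auto-803934 replace --force -f testdata/netcat-deployment.yaml
	# One way to approximate the harness's readiness polling
	kubectl --context auto-803934 wait --for=condition=Ready pod -l app=netcat --timeout=15m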

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (90.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-803934 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-803934 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m30.389489727s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (90.39s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-803934 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-803934 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-803934 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
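DNS, Localhost and HairPin are three connectivity probes run from inside the netcat pod: an in-cluster DNS lookup, a loopback connection to the pod's own port, and a hairpin connection back to the pod through its own Service name. As run against the auto profile:

	kubectl --context auto-803934 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-803934 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-803934 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The same trio repeats below for every network plugin under test.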

                                                
                                    
TestNetworkPlugins/group/flannel/Start (61.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-803934 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1124 14:09:02.084084    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:09:13.430807    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/functional-659953/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:09:22.566088    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-803934 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m1.628566971s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.63s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-h5pjk" [0a963251-6b75-44a9-9d57-dbdf5c905c38] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003682908s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-803934 "pgrep -a kubelet"
I1124 14:09:53.221745    4178 config.go:182] Loaded profile config "kindnet-803934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-803934 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rk7tk" [8ec4b429-bc28-409f-934a-b5634863d1d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rk7tk" [8ec4b429-bc28-409f-934a-b5634863d1d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003187438s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-vbvk6" [3855cac7-3575-4dac-a2de-9f1e506bcffe] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003782382s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-803934 "pgrep -a kubelet"
I1124 14:10:01.609873    4178 config.go:182] Loaded profile config "flannel-803934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-803934 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ccskj" [c3f6bddd-fc40-418d-88a1-7ad9452c706e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ccskj" [c3f6bddd-fc40-418d-88a1-7ad9452c706e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004259053s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-803934 exec deployment/netcat -- nslookup kubernetes.default
E1124 14:10:03.527458    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-803934 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-803934 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-803934 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-803934 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-803934 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (56.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-803934 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-803934 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (56.886782663s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (56.89s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (51.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-803934 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1124 14:10:42.441532    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:11:10.144432    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/old-k8s-version-318786/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-803934 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (51.036069517s)
--- PASS: TestNetworkPlugins/group/bridge/Start (51.04s)
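Taken together, the NetworkPlugins starts in this run differ only in the networking flag passed to start: the auto group passes no CNI flag, kindnet/flannel/bridge use --cni=kindnet, --cni=flannel and --cni=bridge respectively, and the enable-default-cni group uses --enable-default-cni=true. In shorthand, trimmed of logging flags:

	# auto: no CNI flag at all
	out/minikube-linux-arm64 start -p auto-803934 --memory=3072 --wait=true --wait-timeout=15m --driver=docker --container-runtime=containerd
	# explicit CNIs: swap in --cni=kindnet, --cni=flannel or --cni=bridge
	# default CNI config mode: --enable-default-cni=true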

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-803934 "pgrep -a kubelet"
E1124 14:11:25.449359    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/default-k8s-diff-port-609438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1124 14:11:25.493673    4178 config.go:182] Loaded profile config "enable-default-cni-803934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-803934 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vvfk5" [a000b057-e170-47d2-bec2-f38fb71acb2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vvfk5" [a000b057-e170-47d2-bec2-f38fb71acb2a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003138234s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-803934 "pgrep -a kubelet"
I1124 14:11:31.163032    4178 config.go:182] Loaded profile config "bridge-803934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-803934 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mzbtz" [29197009-6686-4201-9231-80e2f78cb520] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1124 14:11:32.929789    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/no-preload-694102/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:11:32.936171    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/no-preload-694102/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:11:32.947881    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/no-preload-694102/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:11:32.969618    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/no-preload-694102/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:11:33.011058    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/no-preload-694102/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:11:33.092690    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/no-preload-694102/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:11:33.254152    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/no-preload-694102/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:11:33.575585    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/no-preload-694102/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:11:34.217016    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/no-preload-694102/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-mzbtz" [29197009-6686-4201-9231-80e2f78cb520] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004283663s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-803934 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-803934 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-803934 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
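Editor's note: each plugin group runs the same three connectivity probes seen above: DNS (nslookup of the in-cluster API service), Localhost (nc back to the pod's own listener), and HairPin (nc to the pod's own Service name, which exercises hairpin NAT on the node). A minimal sketch is below, shelling out to the exact kubectl commands from the log; it assumes the enable-default-cni-803934 context exists and the netcat deployment is already running.

package main

import (
	"fmt"
	"os/exec"
)

// probe runs one connectivity check inside the netcat deployment and reports
// whether it succeeded. The kubectl invocations mirror the ones in the log.
func probe(label, cmd string) {
	out, err := exec.Command("kubectl", "--context", "enable-default-cni-803934",
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", cmd).CombinedOutput()
	fmt.Printf("%-9s ok=%v\n%s\n", label, err == nil, out)
}

func main() {
	// DNS: resolve the in-cluster API service name via cluster DNS.
	probe("DNS", "nslookup kubernetes.default")
	// Localhost: the pod can reach its own listener on 8080.
	probe("Localhost", "nc -w 5 -i 5 -z localhost 8080")
	// HairPin: the pod reaches itself through its own Service name, which
	// requires hairpin NAT to be configured on the node.
	probe("HairPin", "nc -w 5 -i 5 -z netcat 8080")
}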

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-803934 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-803934 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-803934 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (79.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-803934 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1124 14:12:02.197491    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/addons-384875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-803934 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m19.334796011s)
--- PASS: TestNetworkPlugins/group/calico/Start (79.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (68.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-803934 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E1124 14:12:13.908987    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/no-preload-694102/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:12:54.870681    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/no-preload-694102/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-803934 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m8.102111073s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.10s)
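Editor's note: the two Start runs above differ only in the --cni value: a built-in plugin name (calico) versus a path to a CNI manifest (testdata/kube-flannel.yaml). The sketch below reproduces both invocations with the same flags shown in the log; it assumes the out/minikube-linux-arm64 binary and the manifest path exist relative to the working directory.

package main

import (
	"log"
	"os"
	"os/exec"
)

// startWithCNI launches a minikube profile with the given --cni value, using
// the same flags as the Start tests above. The value may be a built-in plugin
// name (e.g. "calico") or a path to a CNI manifest (e.g. "testdata/kube-flannel.yaml").
func startWithCNI(profile, cni string) error {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", profile,
		"--memory=3072", "--alsologtostderr", "--wait=true", "--wait-timeout=15m",
		"--cni="+cni, "--driver=docker", "--container-runtime=containerd")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := startWithCNI("calico-803934", "calico"); err != nil {
		log.Fatal(err)
	}
	if err := startWithCNI("custom-flannel-803934", "testdata/kube-flannel.yaml"); err != nil {
		log.Fatal(err)
	}
}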

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-803934 "pgrep -a kubelet"
I1124 14:13:15.609582    4178 config.go:182] Loaded profile config "custom-flannel-803934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-803934 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-98js9" [6d128b3b-549f-4d4a-953e-663727403953] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1124 14:13:16.671078    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/auto-803934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:13:16.677488    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/auto-803934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:13:16.689656    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/auto-803934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:13:16.711245    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/auto-803934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:13:16.752854    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/auto-803934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:13:16.834414    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/auto-803934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:13:16.996460    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/auto-803934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:13:17.318686    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/auto-803934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:13:17.960968    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/auto-803934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-98js9" [6d128b3b-549f-4d4a-953e-663727403953] Running
E1124 14:13:19.243015    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/auto-803934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:13:21.804489    4178 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/auto-803934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.00417909s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-txmxh" [99e01a49-0295-46c8-9a82-cbcf47ba569c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004523795s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
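Editor's note: the ControllerPod step waits for pods labelled k8s-app=calico-node in kube-system to become healthy. The sketch below expresses an equivalent one-shot readiness wait with `kubectl wait` rather than the polling loop used by helpers_test.go; the context name and label selector come from the log above.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// One-shot readiness wait for the calico-node pods, substituting
	// `kubectl wait` for the test harness's polling loop.
	out, err := exec.Command("kubectl", "--context", "calico-803934",
		"-n", "kube-system", "wait", "--for=condition=Ready",
		"pod", "-l", "k8s-app=calico-node", "--timeout=600s").CombinedOutput()
	if err != nil {
		log.Fatalf("calico-node not ready: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}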

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-803934 "pgrep -a kubelet"
I1124 14:13:24.902900    4178 config.go:182] Loaded profile config "calico-803934": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)
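Editor's note: the KubeletFlags step fetches the running kubelet command line over `minikube ssh` with `pgrep -a kubelet`. Which flags net_test.go actually asserts on is not shown in this log; the containerd-endpoint check below is a hypothetical example of what one might look for, and the binary path is assumed to be the same out/minikube-linux-arm64 used throughout the run.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Fetch the kubelet command line from inside the node, exactly as the
	// KubeletFlags step does.
	out, err := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "calico-803934",
		"pgrep -a kubelet").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh failed: %v\n%s", err, out)
	}
	cmdline := string(out)
	fmt.Println(cmdline)
	// Illustrative assertion only: check that the kubelet was started against
	// the containerd runtime.
	fmt.Println("containerd endpoint present:", strings.Contains(cmdline, "containerd"))
}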

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-803934 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fh4jn" [0a13db80-8745-478e-a593-9ccaca6beb02] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fh4jn" [0a13db80-8745-478e-a593-9ccaca6beb02] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004498302s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-803934 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-803934 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-803934 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-803934 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-803934 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-803934 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    

Test skip (30/333)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.45s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-673434 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-673434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-673434
--- SKIP: TestDownloadOnlyKic (0.45s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-073831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-073831
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-803934 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-803934

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-803934

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-803934

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-803934

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-803934

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-803934

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-803934

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-803934

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-803934

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-803934

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-803934

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-803934" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-803934" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-803934" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:53:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-758885
contexts:
- context:
    cluster: kubernetes-upgrade-758885
    user: kubernetes-upgrade-758885
  name: kubernetes-upgrade-758885
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-758885
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/kubernetes-upgrade-758885/client.crt
    client-key: /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/kubernetes-upgrade-758885/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-803934

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-803934"

                                                
                                                
----------------------- debugLogs end: kubenet-803934 [took: 5.178935145s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-803934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-803934
--- SKIP: TestNetworkPlugins/group/kubenet (5.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-803934 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-803934

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-803934

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-803934

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-803934

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-803934

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-803934

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-803934

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-803934

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-803934

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-803934

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-803934

>>> host: crictl pods:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: crictl containers:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> k8s: describe netcat deployment:
error: context "cilium-803934" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-803934" does not exist

>>> k8s: netcat logs:
error: context "cilium-803934" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-803934" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-803934" does not exist

>>> k8s: coredns logs:
error: context "cilium-803934" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-803934" does not exist

>>> k8s: api server logs:
error: context "cilium-803934" does not exist

>>> host: /etc/cni:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: ip a s:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: ip r s:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: iptables-save:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: iptables table nat:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-803934

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-803934

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-803934" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-803934" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-803934

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-803934

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-803934" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-803934" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-803934" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-803934" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-803934" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: kubelet daemon config:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> k8s: kubelet logs:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-2368/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:58:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-758885
contexts:
- context:
    cluster: kubernetes-upgrade-758885
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:58:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-758885
  name: kubernetes-upgrade-758885
current-context: kubernetes-upgrade-758885
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-758885
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/kubernetes-upgrade-758885/client.crt
    client-key: /home/jenkins/minikube-integration/21932-2368/.minikube/profiles/kubernetes-upgrade-758885/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-803934

>>> host: docker daemon status:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: docker daemon config:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: docker system info:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: cri-docker daemon status:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: cri-docker daemon config:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: cri-dockerd version:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: containerd daemon status:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: containerd daemon config:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: containerd config dump:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: crio daemon status:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: crio daemon config:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: /etc/crio:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

>>> host: crio config:
* Profile "cilium-803934" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803934"

----------------------- debugLogs end: cilium-803934 [took: 5.711200557s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-803934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-803934
--- SKIP: TestNetworkPlugins/group/cilium (5.99s)
